00:00:00.000  Started by upstream project "autotest-nightly" build number 4361
00:00:00.000  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3724
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.156  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.157  The recommended git tool is: git
00:00:00.157  using credential 00000000-0000-0000-0000-000000000002
00:00:00.159   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.202  Fetching changes from the remote Git repository
00:00:00.204   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.239  Using shallow fetch with depth 1
00:00:00.239  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.239   > git --version # timeout=10
00:00:00.269   > git --version # 'git version 2.39.2'
00:00:00.269  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.597   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.608   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.622  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.622   > git config core.sparsecheckout # timeout=10
00:00:08.633   > git read-tree -mu HEAD # timeout=10
00:00:08.648   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.671  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.671   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
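For reference, the shallow checkout Jenkins performs above boils down to the following sequence (repository URL and revision taken from the log; credentials and the Intel proxy are handled by Jenkins and omitted in this sketch):

  # minimal sketch of the shallow fetch + detached checkout logged above
  git init jbp && cd jbp
  git fetch --tags --force --depth=1 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # detached HEAD, as in the log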
00:00:08.750  [Pipeline] Start of Pipeline
00:00:08.760  [Pipeline] library
00:00:08.761  Loading library shm_lib@master
00:00:08.761  Library shm_lib@master is cached. Copying from home.
00:00:08.775  [Pipeline] node
00:00:08.786  Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:08.787  [Pipeline] {
00:00:08.793  [Pipeline] catchError
00:00:08.794  [Pipeline] {
00:00:08.801  [Pipeline] wrap
00:00:08.807  [Pipeline] {
00:00:08.811  [Pipeline] stage
00:00:08.813  [Pipeline] { (Prologue)
00:00:08.997  [Pipeline] sh
00:00:09.277  + logger -p user.info -t JENKINS-CI
00:00:09.294  [Pipeline] echo
00:00:09.296  Node: WFP21
00:00:09.302  [Pipeline] sh
00:00:09.603  [Pipeline] setCustomBuildProperty
00:00:09.616  [Pipeline] echo
00:00:09.618  Cleanup processes
00:00:09.623  [Pipeline] sh
00:00:09.907  + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.907  3023827 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.920  [Pipeline] sh
00:00:10.204  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:10.204  ++ grep -v 'sudo pgrep'
00:00:10.204  ++ awk '{print $1}'
00:00:10.204  + sudo kill -9
00:00:10.204  + true
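The pgrep/grep/awk lines and the bare "kill -9" above are one pipeline: no stale SPDK processes were found, so kill received no PIDs, failed, and the trailing "true" absorbed the non-zero exit. A condensed sketch of that idiom:

  # kill any leftover SPDK processes from a previous run, tolerating "none found"
  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
          | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # empty $pids makes kill fail; || true keeps set -e happy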
00:00:10.220  [Pipeline] cleanWs
00:00:10.230  [WS-CLEANUP] Deleting project workspace...
00:00:10.230  [WS-CLEANUP] Deferred wipeout is used...
00:00:10.237  [WS-CLEANUP] done
00:00:10.241  [Pipeline] setCustomBuildProperty
00:00:10.258  [Pipeline] sh
00:00:10.540  + sudo git config --global --replace-all safe.directory '*'
00:00:10.663  [Pipeline] httpRequest
00:00:11.393  [Pipeline] echo
00:00:11.394  Sorcerer 10.211.164.20 is alive
00:00:11.404  [Pipeline] retry
00:00:11.406  [Pipeline] {
00:00:11.420  [Pipeline] httpRequest
00:00:11.424  HttpMethod: GET
00:00:11.425  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.425  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.438  Response Code: HTTP/1.1 200 OK
00:00:11.438  Success: Status code 200 is in the accepted range: 200,404
00:00:11.439  Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.999  [Pipeline] }
00:00:19.016  [Pipeline] // retry
00:00:19.024  [Pipeline] sh
00:00:19.309  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.326  [Pipeline] httpRequest
00:00:19.735  [Pipeline] echo
00:00:19.737  Sorcerer 10.211.164.20 is alive
00:00:19.747  [Pipeline] retry
00:00:19.749  [Pipeline] {
00:00:19.764  [Pipeline] httpRequest
00:00:19.768  HttpMethod: GET
00:00:19.769  URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:19.769  Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:19.774  Response Code: HTTP/1.1 200 OK
00:00:19.775  Success: Status code 200 is in the accepted range: 200,404
00:00:19.775  Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:30.035  [Pipeline] }
00:01:30.054  [Pipeline] // retry
00:01:30.062  [Pipeline] sh
00:01:30.350  + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
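Both package fetches above follow the same retry-wrapped download-and-unpack pattern; a rough shell equivalent outside Jenkins (mirror URL and package name from the log):

  # rough equivalent of the httpRequest + tar steps
  curl -fSso spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz \
      http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
  tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz   # don't preserve the archive's uid/gid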
00:01:32.903  [Pipeline] sh
00:01:33.188  + git -C spdk log --oneline -n5
00:01:33.188  e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:33.188  d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:33.188  2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:33.188  66289a6db build: use VERSION file for storing version
00:01:33.188  626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:33.200  [Pipeline] }
00:01:33.213  [Pipeline] // stage
00:01:33.222  [Pipeline] stage
00:01:33.225  [Pipeline] { (Prepare)
00:01:33.244  [Pipeline] writeFile
00:01:33.259  [Pipeline] sh
00:01:33.545  + logger -p user.info -t JENKINS-CI
00:01:33.558  [Pipeline] sh
00:01:33.843  + logger -p user.info -t JENKINS-CI
00:01:33.855  [Pipeline] sh
00:01:34.138  + cat autorun-spdk.conf
00:01:34.138  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.138  SPDK_TEST_NVMF=1
00:01:34.138  SPDK_TEST_NVME_CLI=1
00:01:34.138  SPDK_TEST_NVMF_NICS=mlx5
00:01:34.138  SPDK_RUN_ASAN=1
00:01:34.138  SPDK_RUN_UBSAN=1
00:01:34.138  NET_TYPE=phy
00:01:34.146  RUN_NIGHTLY=1
00:01:34.151  [Pipeline] readFile
00:01:34.175  [Pipeline] withEnv
00:01:34.177  [Pipeline] {
00:01:34.189  [Pipeline] sh
00:01:34.475  + set -ex
00:01:34.475  + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:34.475  + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:34.475  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.475  ++ SPDK_TEST_NVMF=1
00:01:34.475  ++ SPDK_TEST_NVME_CLI=1
00:01:34.475  ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:34.475  ++ SPDK_RUN_ASAN=1
00:01:34.475  ++ SPDK_RUN_UBSAN=1
00:01:34.475  ++ NET_TYPE=phy
00:01:34.475  ++ RUN_NIGHTLY=1
00:01:34.475  + case $SPDK_TEST_NVMF_NICS in
00:01:34.475  + DRIVERS=mlx5_ib
00:01:34.475  + [[ -n mlx5_ib ]]
00:01:34.475  + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:34.475  rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:41.053  rmmod: ERROR: Module irdma is not currently loaded
00:01:41.053  rmmod: ERROR: Module i40iw is not currently loaded
00:01:41.053  rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:41.053  + true
00:01:41.053  + for D in $DRIVERS
00:01:41.053  + sudo modprobe mlx5_ib
00:01:41.053  + exit 0
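The block above resets the RDMA stack to the single driver the job needs: every candidate module is force-removed (the "not currently loaded" errors are expected and swallowed), then only the driver selected by SPDK_TEST_NVMF_NICS is loaded back. A condensed sketch; only the mlx5 branch is visible in this log, other NIC types presumably map to their own driver lists:

  # reset RDMA drivers to just the one under test
  case $SPDK_TEST_NVMF_NICS in
      mlx5) DRIVERS=mlx5_ib ;;        # other branches not shown in this log
  esac
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true   # ignore modules that aren't loaded
  for D in $DRIVERS; do
      sudo modprobe $D
  done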
00:01:41.064  [Pipeline] }
00:01:41.080  [Pipeline] // withEnv
00:01:41.085  [Pipeline] }
00:01:41.099  [Pipeline] // stage
00:01:41.109  [Pipeline] catchError
00:01:41.110  [Pipeline] {
00:01:41.125  [Pipeline] timeout
00:01:41.125  Timeout set to expire in 1 hr 0 min
00:01:41.127  [Pipeline] {
00:01:41.141  [Pipeline] stage
00:01:41.143  [Pipeline] { (Tests)
00:01:41.158  [Pipeline] sh
00:01:41.444  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:41.444  ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:41.444  + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:41.444  + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:41.444  + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:41.444  + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:41.444  + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:41.444  + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:41.444  + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:41.444  + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:41.444  + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:41.444  + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:41.444  + source /etc/os-release
00:01:41.444  ++ NAME='Fedora Linux'
00:01:41.444  ++ VERSION='39 (Cloud Edition)'
00:01:41.444  ++ ID=fedora
00:01:41.444  ++ VERSION_ID=39
00:01:41.444  ++ VERSION_CODENAME=
00:01:41.444  ++ PLATFORM_ID=platform:f39
00:01:41.444  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:41.444  ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:41.444  ++ LOGO=fedora-logo-icon
00:01:41.444  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:41.444  ++ HOME_URL=https://fedoraproject.org/
00:01:41.444  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:41.444  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:41.444  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:41.444  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:41.444  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:41.444  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:41.444  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:41.444  ++ SUPPORT_END=2024-11-12
00:01:41.444  ++ VARIANT='Cloud Edition'
00:01:41.444  ++ VARIANT_ID=cloud
00:01:41.444  + uname -a
00:01:41.444  Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:41.444  + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:44.740  Hugepages
00:01:44.740  node     hugesize     free /  total
00:01:44.740  node0   1048576kB        0 /      0
00:01:44.740  node0      2048kB        0 /      0
00:01:44.740  node1   1048576kB        0 /      0
00:01:44.740  node1      2048kB        0 /      0
00:01:44.740  
00:01:44.740  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:44.740  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:01:44.740  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:01:44.740  NVMe                      0000:d8:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
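The hugepage table printed by setup.sh status mirrors what the kernel exposes in sysfs; a standalone query that yields the same per-node free/total numbers (standard sysfs paths, not taken from setup.sh itself):

  # per-NUMA-node hugepage counts, free vs. total
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo "$(basename "$node") $(basename "$hp"):" \
               "$(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
      done
  done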
00:01:44.740  + rm -f /tmp/spdk-ld-path
00:01:44.740  + source autorun-spdk.conf
00:01:44.740  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.740  ++ SPDK_TEST_NVMF=1
00:01:44.740  ++ SPDK_TEST_NVME_CLI=1
00:01:44.740  ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:44.740  ++ SPDK_RUN_ASAN=1
00:01:44.740  ++ SPDK_RUN_UBSAN=1
00:01:44.740  ++ NET_TYPE=phy
00:01:44.740  ++ RUN_NIGHTLY=1
00:01:44.740  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:44.740  + [[ -n '' ]]
00:01:44.740  + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:44.740  + for M in /var/spdk/build-*-manifest.txt
00:01:44.740  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:44.740  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:44.740  + for M in /var/spdk/build-*-manifest.txt
00:01:44.740  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:44.740  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:44.740  + for M in /var/spdk/build-*-manifest.txt
00:01:44.740  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:44.740  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
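The copy loop above collects whichever build manifests exist on the node into the job's output directory; in plain form:

  # stage any build manifests for archiving
  for M in /var/spdk/build-*-manifest.txt; do
      [[ -f $M ]] && cp "$M" /var/jenkins/workspace/nvmf-phy-autotest/output/
  done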
00:01:44.740  ++ uname
00:01:44.740  + [[ Linux == \L\i\n\u\x ]]
00:01:44.740  + sudo dmesg -T
00:01:44.740  + sudo dmesg --clear
00:01:44.740  + dmesg_pid=3025300
00:01:44.740  + [[ Fedora Linux == FreeBSD ]]
00:01:44.740  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:44.740  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:44.740  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:44.740  + [[ -x /usr/src/fio-static/fio ]]
00:01:44.740  + export FIO_BIN=/usr/src/fio-static/fio
00:01:44.740  + FIO_BIN=/usr/src/fio-static/fio
00:01:44.740  + sudo dmesg -Tw
00:01:44.740  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:44.740  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:44.740  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:44.740  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:44.740  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:44.740  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:44.740  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:44.740  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:44.740  + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:44.740    13:27:44  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:44.740   13:27:44  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy
00:01:44.740    13:27:44  -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:01:44.740   13:27:44  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:44.740   13:27:44  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:45.000     13:27:44  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:45.000    13:27:44  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:45.000     13:27:44  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:45.000     13:27:44  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:45.000     13:27:44  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:45.000     13:27:44  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:45.000      13:27:44  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.000      13:27:44  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.000      13:27:44  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.000      13:27:44  -- paths/export.sh@5 -- $ export PATH
00:01:45.000      13:27:44  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.000    13:27:44  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:45.000      13:27:44  -- common/autobuild_common.sh@493 -- $ date +%s
00:01:45.000     13:27:44  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734179264.XXXXXX
00:01:45.000    13:27:44  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734179264.bxwrKN
00:01:45.000    13:27:44  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:45.000    13:27:44  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:45.000    13:27:44  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:01:45.000    13:27:44  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:45.000    13:27:44  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:45.000     13:27:44  -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:45.000     13:27:44  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:45.000     13:27:44  -- common/autotest_common.sh@10 -- $ set +x
00:01:45.000    13:27:44  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:45.000    13:27:44  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:45.000    13:27:44  -- pm/common@17 -- $ local monitor
00:01:45.000    13:27:44  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.000    13:27:44  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.000    13:27:44  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.000     13:27:44  -- pm/common@21 -- $ date +%s
00:01:45.000    13:27:44  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.000     13:27:44  -- pm/common@21 -- $ date +%s
00:01:45.000    13:27:44  -- pm/common@25 -- $ sleep 1
00:01:45.000     13:27:44  -- pm/common@21 -- $ date +%s
00:01:45.000     13:27:44  -- pm/common@21 -- $ date +%s
00:01:45.000    13:27:44  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734179264
00:01:45.000    13:27:44  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734179264
00:01:45.000    13:27:44  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734179264
00:01:45.000    13:27:44  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734179264
00:01:45.000  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734179264_collect-cpu-temp.pm.log
00:01:45.000  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734179264_collect-cpu-load.pm.log
00:01:45.000  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734179264_collect-vmstat.pm.log
00:01:45.000  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734179264_collect-bmc-pm.bmc.pm.log
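The four "Redirecting to ..." lines mark the power/resource monitors being started for the build, each writing a log keyed by the run's epoch timestamp (1734179264 here). A loose sketch of the launch pattern, with $SPDK_DIR and $OUT as placeholders for the paths seen in the log:

  # start resource collectors in the background, one log per collector per run
  suffix=monitor.autobuild.sh.$(date +%s)
  for c in collect-cpu-load collect-cpu-temp collect-vmstat; do
      "$SPDK_DIR/scripts/perf/pm/$c" -d "$OUT/power" -l -p "$suffix" &
  done
  sudo -E "$SPDK_DIR/scripts/perf/pm/collect-bmc-pm" -d "$OUT/power" -l -p "$suffix" &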
00:01:45.941    13:27:45  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:45.941   13:27:45  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:45.941   13:27:45  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:45.941   13:27:45  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:45.941   13:27:45  -- spdk/autobuild.sh@16 -- $ date -u
00:01:45.941  Sat Dec 14 12:27:45 PM UTC 2024
00:01:45.941   13:27:45  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:45.941  v25.01-rc1-2-ge01cb43b8
00:01:45.941   13:27:45  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:45.941   13:27:45  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:45.941   13:27:45  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:45.941   13:27:45  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:45.941   13:27:45  -- common/autotest_common.sh@10 -- $ set +x
00:01:45.941  ************************************
00:01:45.941  START TEST asan
00:01:45.941  ************************************
00:01:45.941   13:27:45 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:45.941  using asan
00:01:45.941  
00:01:45.941  real	0m0.001s
00:01:45.941  user	0m0.000s
00:01:45.941  sys	0m0.000s
00:01:45.941   13:27:45 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:45.941   13:27:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:45.941  ************************************
00:01:45.941  END TEST asan
00:01:45.941  ************************************
00:01:45.941   13:27:45  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:45.941   13:27:45  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:45.941   13:27:45  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:45.941   13:27:45  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:45.941   13:27:45  -- common/autotest_common.sh@10 -- $ set +x
00:01:45.941  ************************************
00:01:45.941  START TEST ubsan
00:01:45.941  ************************************
00:01:45.941   13:27:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:45.941  using ubsan
00:01:45.941  
00:01:45.941  real	0m0.000s
00:01:45.941  user	0m0.000s
00:01:45.941  sys	0m0.000s
00:01:45.941   13:27:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:45.941   13:27:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:45.941  ************************************
00:01:45.941  END TEST ubsan
00:01:45.941  ************************************
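The asan/ubsan blocks above are produced by SPDK's run_test helper in common/autotest_common.sh, which brackets a command with START/END banners and times it (the real/user/sys lines). A simplified sketch of that pattern; the actual helper also manages xtrace state:

  # simplified run_test: banner, time the command, banner again
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test ubsan echo 'using ubsan'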
00:01:46.199   13:27:45  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:46.199   13:27:45  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:46.199   13:27:45  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:46.199   13:27:45  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:46.199   13:27:45  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:46.199   13:27:45  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:46.199   13:27:45  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:46.199   13:27:45  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:46.200   13:27:45  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:46.200  Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:46.200  Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:46.459  Using 'verbs' RDMA provider
00:02:02.280  Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:14.496  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:14.496  Creating mk/config.mk...done.
00:02:14.496  Creating mk/cc.flags.mk...done.
00:02:14.496  Type 'make' to build.
00:02:14.496   13:28:13  -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:02:14.496   13:28:13  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:14.496   13:28:13  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:14.496   13:28:13  -- common/autotest_common.sh@10 -- $ set +x
00:02:14.496  ************************************
00:02:14.496  START TEST make
00:02:14.496  ************************************
00:02:14.496   13:28:13 make -- common/autotest_common.sh@1129 -- $ make -j112
00:02:24.587  The Meson build system
00:02:24.587  Version: 1.5.0
00:02:24.587  Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:02:24.587  Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:02:24.587  Build type: native build
00:02:24.587  Program cat found: YES (/usr/bin/cat)
00:02:24.587  Project name: DPDK
00:02:24.587  Project version: 24.03.0
00:02:24.587  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:24.587  C linker for the host machine: cc ld.bfd 2.40-14
00:02:24.587  Host machine cpu family: x86_64
00:02:24.587  Host machine cpu: x86_64
00:02:24.587  Message: ## Building in Developer Mode ##
00:02:24.587  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:24.587  Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:24.587  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:24.587  Program python3 found: YES (/usr/bin/python3)
00:02:24.587  Program cat found: YES (/usr/bin/cat)
00:02:24.587  Compiler for C supports arguments -march=native: YES 
00:02:24.587  Checking for size of "void *" : 8 
00:02:24.587  Checking for size of "void *" : 8 (cached)
00:02:24.587  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:24.587  Library m found: YES
00:02:24.587  Library numa found: YES
00:02:24.587  Has header "numaif.h" : YES 
00:02:24.587  Library fdt found: NO
00:02:24.587  Library execinfo found: NO
00:02:24.587  Has header "execinfo.h" : YES 
00:02:24.587  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:24.587  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:24.587  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:24.587  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:24.587  Run-time dependency openssl found: YES 3.1.1
00:02:24.587  Run-time dependency libpcap found: YES 1.10.4
00:02:24.587  Has header "pcap.h" with dependency libpcap: YES 
00:02:24.587  Compiler for C supports arguments -Wcast-qual: YES 
00:02:24.587  Compiler for C supports arguments -Wdeprecated: YES 
00:02:24.587  Compiler for C supports arguments -Wformat: YES 
00:02:24.587  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:24.587  Compiler for C supports arguments -Wformat-security: NO 
00:02:24.587  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:24.587  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:24.587  Compiler for C supports arguments -Wnested-externs: YES 
00:02:24.587  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:24.587  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:24.587  Compiler for C supports arguments -Wsign-compare: YES 
00:02:24.587  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:24.587  Compiler for C supports arguments -Wundef: YES 
00:02:24.587  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:24.587  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:24.587  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:24.587  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:24.587  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:24.587  Program objdump found: YES (/usr/bin/objdump)
00:02:24.587  Compiler for C supports arguments -mavx512f: YES 
00:02:24.587  Checking if "AVX512 checking" compiles: YES 
00:02:24.587  Fetching value of define "__SSE4_2__" : 1 
00:02:24.587  Fetching value of define "__AES__" : 1 
00:02:24.587  Fetching value of define "__AVX__" : 1 
00:02:24.587  Fetching value of define "__AVX2__" : 1 
00:02:24.587  Fetching value of define "__AVX512BW__" : 1 
00:02:24.587  Fetching value of define "__AVX512CD__" : 1 
00:02:24.587  Fetching value of define "__AVX512DQ__" : 1 
00:02:24.587  Fetching value of define "__AVX512F__" : 1 
00:02:24.587  Fetching value of define "__AVX512VL__" : 1 
00:02:24.587  Fetching value of define "__PCLMUL__" : 1 
00:02:24.587  Fetching value of define "__RDRND__" : 1 
00:02:24.587  Fetching value of define "__RDSEED__" : 1 
00:02:24.587  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:24.587  Fetching value of define "__znver1__" : (undefined) 
00:02:24.587  Fetching value of define "__znver2__" : (undefined) 
00:02:24.587  Fetching value of define "__znver3__" : (undefined) 
00:02:24.587  Fetching value of define "__znver4__" : (undefined) 
00:02:24.587  Library asan found: YES
00:02:24.587  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:24.587  Message: lib/log: Defining dependency "log"
00:02:24.587  Message: lib/kvargs: Defining dependency "kvargs"
00:02:24.587  Message: lib/telemetry: Defining dependency "telemetry"
00:02:24.587  Library rt found: YES
00:02:24.587  Checking for function "getentropy" : NO 
00:02:24.587  Message: lib/eal: Defining dependency "eal"
00:02:24.587  Message: lib/ring: Defining dependency "ring"
00:02:24.587  Message: lib/rcu: Defining dependency "rcu"
00:02:24.587  Message: lib/mempool: Defining dependency "mempool"
00:02:24.587  Message: lib/mbuf: Defining dependency "mbuf"
00:02:24.587  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:24.587  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:24.587  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:24.587  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:24.587  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:24.587  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:24.587  Compiler for C supports arguments -mpclmul: YES 
00:02:24.587  Compiler for C supports arguments -maes: YES 
00:02:24.587  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:24.587  Compiler for C supports arguments -mavx512bw: YES 
00:02:24.587  Compiler for C supports arguments -mavx512dq: YES 
00:02:24.587  Compiler for C supports arguments -mavx512vl: YES 
00:02:24.587  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:24.587  Compiler for C supports arguments -mavx2: YES 
00:02:24.587  Compiler for C supports arguments -mavx: YES 
00:02:24.587  Message: lib/net: Defining dependency "net"
00:02:24.587  Message: lib/meter: Defining dependency "meter"
00:02:24.587  Message: lib/ethdev: Defining dependency "ethdev"
00:02:24.587  Message: lib/pci: Defining dependency "pci"
00:02:24.587  Message: lib/cmdline: Defining dependency "cmdline"
00:02:24.587  Message: lib/hash: Defining dependency "hash"
00:02:24.587  Message: lib/timer: Defining dependency "timer"
00:02:24.587  Message: lib/compressdev: Defining dependency "compressdev"
00:02:24.587  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:24.587  Message: lib/dmadev: Defining dependency "dmadev"
00:02:24.587  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:24.587  Message: lib/power: Defining dependency "power"
00:02:24.587  Message: lib/reorder: Defining dependency "reorder"
00:02:24.587  Message: lib/security: Defining dependency "security"
00:02:24.587  Has header "linux/userfaultfd.h" : YES 
00:02:24.587  Has header "linux/vduse.h" : YES 
00:02:24.587  Message: lib/vhost: Defining dependency "vhost"
00:02:24.587  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:24.587  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:24.587  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:24.587  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:24.587  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:24.587  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:24.587  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:24.588  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:24.588  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:24.588  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:24.588  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:24.588  Configuring doxy-api-html.conf using configuration
00:02:24.588  Configuring doxy-api-man.conf using configuration
00:02:24.588  Program mandb found: YES (/usr/bin/mandb)
00:02:24.588  Program sphinx-build found: NO
00:02:24.588  Configuring rte_build_config.h using configuration
00:02:24.588  Message: 
00:02:24.588  =================
00:02:24.588  Applications Enabled
00:02:24.588  =================
00:02:24.588  
00:02:24.588  apps:
00:02:24.588  	
00:02:24.588  
00:02:24.588  Message: 
00:02:24.588  =================
00:02:24.588  Libraries Enabled
00:02:24.588  =================
00:02:24.588  
00:02:24.588  libs:
00:02:24.588  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:24.588  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:24.588  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:24.588  
00:02:24.588  Message: 
00:02:24.588  ===============
00:02:24.588  Drivers Enabled
00:02:24.588  ===============
00:02:24.588  
00:02:24.588  common:
00:02:24.588  	
00:02:24.588  bus:
00:02:24.588  	pci, vdev, 
00:02:24.588  mempool:
00:02:24.588  	ring, 
00:02:24.588  dma:
00:02:24.588  	
00:02:24.588  net:
00:02:24.588  	
00:02:24.588  crypto:
00:02:24.588  	
00:02:24.588  compress:
00:02:24.588  	
00:02:24.588  vdpa:
00:02:24.588  	
00:02:24.588  
00:02:24.588  Message: 
00:02:24.588  =================
00:02:24.588  Content Skipped
00:02:24.588  =================
00:02:24.588  
00:02:24.588  apps:
00:02:24.588  	dumpcap:	explicitly disabled via build config
00:02:24.588  	graph:	explicitly disabled via build config
00:02:24.588  	pdump:	explicitly disabled via build config
00:02:24.588  	proc-info:	explicitly disabled via build config
00:02:24.588  	test-acl:	explicitly disabled via build config
00:02:24.588  	test-bbdev:	explicitly disabled via build config
00:02:24.588  	test-cmdline:	explicitly disabled via build config
00:02:24.588  	test-compress-perf:	explicitly disabled via build config
00:02:24.588  	test-crypto-perf:	explicitly disabled via build config
00:02:24.588  	test-dma-perf:	explicitly disabled via build config
00:02:24.588  	test-eventdev:	explicitly disabled via build config
00:02:24.588  	test-fib:	explicitly disabled via build config
00:02:24.588  	test-flow-perf:	explicitly disabled via build config
00:02:24.588  	test-gpudev:	explicitly disabled via build config
00:02:24.588  	test-mldev:	explicitly disabled via build config
00:02:24.588  	test-pipeline:	explicitly disabled via build config
00:02:24.588  	test-pmd:	explicitly disabled via build config
00:02:24.588  	test-regex:	explicitly disabled via build config
00:02:24.588  	test-sad:	explicitly disabled via build config
00:02:24.588  	test-security-perf:	explicitly disabled via build config
00:02:24.588  	
00:02:24.588  libs:
00:02:24.588  	argparse:	explicitly disabled via build config
00:02:24.588  	metrics:	explicitly disabled via build config
00:02:24.588  	acl:	explicitly disabled via build config
00:02:24.588  	bbdev:	explicitly disabled via build config
00:02:24.588  	bitratestats:	explicitly disabled via build config
00:02:24.588  	bpf:	explicitly disabled via build config
00:02:24.588  	cfgfile:	explicitly disabled via build config
00:02:24.588  	distributor:	explicitly disabled via build config
00:02:24.588  	efd:	explicitly disabled via build config
00:02:24.588  	eventdev:	explicitly disabled via build config
00:02:24.588  	dispatcher:	explicitly disabled via build config
00:02:24.588  	gpudev:	explicitly disabled via build config
00:02:24.588  	gro:	explicitly disabled via build config
00:02:24.588  	gso:	explicitly disabled via build config
00:02:24.588  	ip_frag:	explicitly disabled via build config
00:02:24.588  	jobstats:	explicitly disabled via build config
00:02:24.588  	latencystats:	explicitly disabled via build config
00:02:24.588  	lpm:	explicitly disabled via build config
00:02:24.588  	member:	explicitly disabled via build config
00:02:24.588  	pcapng:	explicitly disabled via build config
00:02:24.588  	rawdev:	explicitly disabled via build config
00:02:24.588  	regexdev:	explicitly disabled via build config
00:02:24.588  	mldev:	explicitly disabled via build config
00:02:24.588  	rib:	explicitly disabled via build config
00:02:24.588  	sched:	explicitly disabled via build config
00:02:24.588  	stack:	explicitly disabled via build config
00:02:24.588  	ipsec:	explicitly disabled via build config
00:02:24.588  	pdcp:	explicitly disabled via build config
00:02:24.588  	fib:	explicitly disabled via build config
00:02:24.588  	port:	explicitly disabled via build config
00:02:24.588  	pdump:	explicitly disabled via build config
00:02:24.588  	table:	explicitly disabled via build config
00:02:24.588  	pipeline:	explicitly disabled via build config
00:02:24.588  	graph:	explicitly disabled via build config
00:02:24.588  	node:	explicitly disabled via build config
00:02:24.588  	
00:02:24.588  drivers:
00:02:24.588  	common/cpt:	not in enabled drivers build config
00:02:24.588  	common/dpaax:	not in enabled drivers build config
00:02:24.588  	common/iavf:	not in enabled drivers build config
00:02:24.588  	common/idpf:	not in enabled drivers build config
00:02:24.588  	common/ionic:	not in enabled drivers build config
00:02:24.588  	common/mvep:	not in enabled drivers build config
00:02:24.588  	common/octeontx:	not in enabled drivers build config
00:02:24.588  	bus/auxiliary:	not in enabled drivers build config
00:02:24.588  	bus/cdx:	not in enabled drivers build config
00:02:24.588  	bus/dpaa:	not in enabled drivers build config
00:02:24.588  	bus/fslmc:	not in enabled drivers build config
00:02:24.588  	bus/ifpga:	not in enabled drivers build config
00:02:24.588  	bus/platform:	not in enabled drivers build config
00:02:24.588  	bus/uacce:	not in enabled drivers build config
00:02:24.588  	bus/vmbus:	not in enabled drivers build config
00:02:24.588  	common/cnxk:	not in enabled drivers build config
00:02:24.588  	common/mlx5:	not in enabled drivers build config
00:02:24.588  	common/nfp:	not in enabled drivers build config
00:02:24.588  	common/nitrox:	not in enabled drivers build config
00:02:24.588  	common/qat:	not in enabled drivers build config
00:02:24.588  	common/sfc_efx:	not in enabled drivers build config
00:02:24.588  	mempool/bucket:	not in enabled drivers build config
00:02:24.588  	mempool/cnxk:	not in enabled drivers build config
00:02:24.588  	mempool/dpaa:	not in enabled drivers build config
00:02:24.588  	mempool/dpaa2:	not in enabled drivers build config
00:02:24.588  	mempool/octeontx:	not in enabled drivers build config
00:02:24.588  	mempool/stack:	not in enabled drivers build config
00:02:24.588  	dma/cnxk:	not in enabled drivers build config
00:02:24.588  	dma/dpaa:	not in enabled drivers build config
00:02:24.588  	dma/dpaa2:	not in enabled drivers build config
00:02:24.588  	dma/hisilicon:	not in enabled drivers build config
00:02:24.588  	dma/idxd:	not in enabled drivers build config
00:02:24.588  	dma/ioat:	not in enabled drivers build config
00:02:24.588  	dma/skeleton:	not in enabled drivers build config
00:02:24.588  	net/af_packet:	not in enabled drivers build config
00:02:24.588  	net/af_xdp:	not in enabled drivers build config
00:02:24.588  	net/ark:	not in enabled drivers build config
00:02:24.588  	net/atlantic:	not in enabled drivers build config
00:02:24.588  	net/avp:	not in enabled drivers build config
00:02:24.588  	net/axgbe:	not in enabled drivers build config
00:02:24.588  	net/bnx2x:	not in enabled drivers build config
00:02:24.588  	net/bnxt:	not in enabled drivers build config
00:02:24.588  	net/bonding:	not in enabled drivers build config
00:02:24.588  	net/cnxk:	not in enabled drivers build config
00:02:24.588  	net/cpfl:	not in enabled drivers build config
00:02:24.588  	net/cxgbe:	not in enabled drivers build config
00:02:24.588  	net/dpaa:	not in enabled drivers build config
00:02:24.588  	net/dpaa2:	not in enabled drivers build config
00:02:24.588  	net/e1000:	not in enabled drivers build config
00:02:24.588  	net/ena:	not in enabled drivers build config
00:02:24.588  	net/enetc:	not in enabled drivers build config
00:02:24.588  	net/enetfec:	not in enabled drivers build config
00:02:24.588  	net/enic:	not in enabled drivers build config
00:02:24.588  	net/failsafe:	not in enabled drivers build config
00:02:24.588  	net/fm10k:	not in enabled drivers build config
00:02:24.588  	net/gve:	not in enabled drivers build config
00:02:24.588  	net/hinic:	not in enabled drivers build config
00:02:24.588  	net/hns3:	not in enabled drivers build config
00:02:24.588  	net/i40e:	not in enabled drivers build config
00:02:24.588  	net/iavf:	not in enabled drivers build config
00:02:24.588  	net/ice:	not in enabled drivers build config
00:02:24.588  	net/idpf:	not in enabled drivers build config
00:02:24.588  	net/igc:	not in enabled drivers build config
00:02:24.588  	net/ionic:	not in enabled drivers build config
00:02:24.588  	net/ipn3ke:	not in enabled drivers build config
00:02:24.588  	net/ixgbe:	not in enabled drivers build config
00:02:24.588  	net/mana:	not in enabled drivers build config
00:02:24.588  	net/memif:	not in enabled drivers build config
00:02:24.588  	net/mlx4:	not in enabled drivers build config
00:02:24.588  	net/mlx5:	not in enabled drivers build config
00:02:24.588  	net/mvneta:	not in enabled drivers build config
00:02:24.588  	net/mvpp2:	not in enabled drivers build config
00:02:24.588  	net/netvsc:	not in enabled drivers build config
00:02:24.588  	net/nfb:	not in enabled drivers build config
00:02:24.588  	net/nfp:	not in enabled drivers build config
00:02:24.588  	net/ngbe:	not in enabled drivers build config
00:02:24.588  	net/null:	not in enabled drivers build config
00:02:24.588  	net/octeontx:	not in enabled drivers build config
00:02:24.588  	net/octeon_ep:	not in enabled drivers build config
00:02:24.588  	net/pcap:	not in enabled drivers build config
00:02:24.588  	net/pfe:	not in enabled drivers build config
00:02:24.588  	net/qede:	not in enabled drivers build config
00:02:24.588  	net/ring:	not in enabled drivers build config
00:02:24.588  	net/sfc:	not in enabled drivers build config
00:02:24.588  	net/softnic:	not in enabled drivers build config
00:02:24.588  	net/tap:	not in enabled drivers build config
00:02:24.588  	net/thunderx:	not in enabled drivers build config
00:02:24.588  	net/txgbe:	not in enabled drivers build config
00:02:24.588  	net/vdev_netvsc:	not in enabled drivers build config
00:02:24.588  	net/vhost:	not in enabled drivers build config
00:02:24.588  	net/virtio:	not in enabled drivers build config
00:02:24.588  	net/vmxnet3:	not in enabled drivers build config
00:02:24.588  	raw/*:	missing internal dependency, "rawdev"
00:02:24.588  	crypto/armv8:	not in enabled drivers build config
00:02:24.589  	crypto/bcmfs:	not in enabled drivers build config
00:02:24.589  	crypto/caam_jr:	not in enabled drivers build config
00:02:24.589  	crypto/ccp:	not in enabled drivers build config
00:02:24.589  	crypto/cnxk:	not in enabled drivers build config
00:02:24.589  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:24.589  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:24.589  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:24.589  	crypto/mlx5:	not in enabled drivers build config
00:02:24.589  	crypto/mvsam:	not in enabled drivers build config
00:02:24.589  	crypto/nitrox:	not in enabled drivers build config
00:02:24.589  	crypto/null:	not in enabled drivers build config
00:02:24.589  	crypto/octeontx:	not in enabled drivers build config
00:02:24.589  	crypto/openssl:	not in enabled drivers build config
00:02:24.589  	crypto/scheduler:	not in enabled drivers build config
00:02:24.589  	crypto/uadk:	not in enabled drivers build config
00:02:24.589  	crypto/virtio:	not in enabled drivers build config
00:02:24.589  	compress/isal:	not in enabled drivers build config
00:02:24.589  	compress/mlx5:	not in enabled drivers build config
00:02:24.589  	compress/nitrox:	not in enabled drivers build config
00:02:24.589  	compress/octeontx:	not in enabled drivers build config
00:02:24.589  	compress/zlib:	not in enabled drivers build config
00:02:24.589  	regex/*:	missing internal dependency, "regexdev"
00:02:24.589  	ml/*:	missing internal dependency, "mldev"
00:02:24.589  	vdpa/ifc:	not in enabled drivers build config
00:02:24.589  	vdpa/mlx5:	not in enabled drivers build config
00:02:24.589  	vdpa/nfp:	not in enabled drivers build config
00:02:24.589  	vdpa/sfc:	not in enabled drivers build config
00:02:24.589  	event/*:	missing internal dependency, "eventdev"
00:02:24.589  	baseband/*:	missing internal dependency, "bbdev"
00:02:24.589  	gpu/*:	missing internal dependency, "gpudev"
00:02:24.589  	
00:02:24.589  
00:02:24.589  Build targets in project: 85
00:02:24.589  
00:02:24.589  DPDK 24.03.0
00:02:24.589  
00:02:24.589    User defined options
00:02:24.589      buildtype          : debug
00:02:24.589      default_library    : shared
00:02:24.589      libdir             : lib
00:02:24.589      prefix             : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:02:24.589      b_sanitize         : address
00:02:24.589      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:02:24.589      c_link_args        : 
00:02:24.589      cpu_instruction_set: native
00:02:24.589      disable_apps       : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:24.589      disable_libs       : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:24.589      enable_docs        : false
00:02:24.589      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:24.589      enable_kmods       : false
00:02:24.589      max_lcores         : 128
00:02:24.589      tests              : false
00:02:24.589  
00:02:24.589  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:24.589  ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:02:24.589  [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:24.589  [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.589  [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:24.589  [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:24.589  [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:24.589  [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:24.589  [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:24.589  [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:24.589  [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:24.589  [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:24.589  [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:24.589  [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:24.589  [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:24.589  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:24.589  [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:24.589  [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:24.589  [17/268] Linking static target lib/librte_kvargs.a
00:02:24.589  [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:24.589  [19/268] Linking static target lib/librte_log.a
00:02:24.589  [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:24.589  [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:24.589  [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:24.589  [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:24.589  [24/268] Linking static target lib/librte_pci.a
00:02:24.589  [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:24.589  [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:24.589  [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:24.589  [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:24.589  [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:24.589  [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:24.589  [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:24.589  [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:24.589  [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:24.589  [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:24.589  [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:24.589  [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.589  [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:24.589  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.589  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:24.589  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:24.589  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:24.589  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:24.589  [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:24.589  [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:24.589  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:24.589  [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:24.589  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:24.589  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:24.589  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:24.589  [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.589  [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:24.589  [52/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:24.589  [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.589  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.589  [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:24.589  [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:24.589  [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:24.589  [58/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:24.589  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:24.589  [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:24.589  [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:24.589  [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:24.589  [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:24.589  [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:24.589  [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:24.589  [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:24.589  [67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:24.589  [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:24.589  [69/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:24.589  [70/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.589  [71/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:24.589  [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.589  [73/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:24.589  [74/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:24.589  [75/268] Linking static target lib/librte_meter.a
00:02:24.589  [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:24.589  [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:24.589  [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:24.589  [79/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:24.589  [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:24.589  [81/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.589  [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:24.589  [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:24.589  [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:24.589  [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:24.589  [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:24.589  [87/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:24.589  [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:24.589  [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:24.589  [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:24.589  [91/268] Linking static target lib/librte_ring.a
00:02:24.589  [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:24.589  [93/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:24.589  [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:24.589  [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:24.589  [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.589  [97/268] Linking static target lib/librte_telemetry.a
00:02:24.589  [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:24.589  [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:24.589  [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:24.590  [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:24.590  [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:24.590  [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:24.590  [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:24.590  [105/268] Linking static target lib/librte_cmdline.a
00:02:24.590  [106/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:24.590  [107/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:24.590  [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:24.848  [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:24.848  [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:24.848  [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:24.848  [112/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:24.848  [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:24.848  [114/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:24.848  [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:24.848  [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:24.848  [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:24.848  [118/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:24.848  [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:24.848  [120/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:24.848  [121/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:24.848  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:24.848  [123/268] Linking static target lib/librte_timer.a
00:02:24.848  [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:24.848  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:24.848  [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:24.848  [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:24.848  [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:24.848  [129/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:24.848  [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:24.848  [131/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:24.848  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:24.848  [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:24.848  [134/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:24.848  [135/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:24.848  [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:24.848  [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:24.848  [138/268] Linking static target lib/librte_compressdev.a
00:02:24.848  [139/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:24.848  [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.848  [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:24.848  [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:24.848  [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:24.848  [144/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:24.848  [145/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:24.848  [146/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.848  [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:24.848  [148/268] Linking static target lib/librte_net.a
00:02:25.106  [149/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:25.106  [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:25.106  [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:25.106  [152/268] Linking static target lib/librte_mempool.a
00:02:25.106  [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:25.106  [154/268] Linking target lib/librte_log.so.24.1
00:02:25.106  [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:25.106  [156/268] Linking static target lib/librte_dmadev.a
00:02:25.106  [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:25.106  [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:25.106  [159/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:25.106  [160/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.106  [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:25.106  [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:25.106  [163/268] Linking static target lib/librte_eal.a
00:02:25.106  [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:25.106  [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:25.106  [166/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:25.106  [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:25.106  [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:25.106  [169/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:25.106  [170/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:25.106  [171/268] Linking static target lib/librte_rcu.a
00:02:25.106  [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:25.106  [173/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:25.106  [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:25.106  [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:25.106  [176/268] Linking static target lib/librte_reorder.a
00:02:25.106  [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:25.106  [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:25.106  [179/268] Linking static target lib/librte_power.a
00:02:25.106  [180/268] Linking target lib/librte_kvargs.so.24.1
00:02:25.106  [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:25.106  [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:25.106  [183/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.106  [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:25.364  [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:25.364  [186/268] Linking static target lib/librte_security.a
00:02:25.364  [187/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.364  [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:25.364  [189/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.364  [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:25.364  [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:25.364  [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:25.364  [193/268] Linking target lib/librte_telemetry.so.24.1
00:02:25.364  [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:25.364  [195/268] Linking static target drivers/librte_bus_vdev.a
00:02:25.364  [196/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:25.364  [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:25.364  [198/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:25.364  [199/268] Linking static target lib/librte_hash.a
00:02:25.364  [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.364  [201/268] Linking static target lib/librte_mbuf.a
00:02:25.364  [202/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.364  [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:25.364  [204/268] Linking static target drivers/librte_bus_pci.a
00:02:25.364  [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:25.364  [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:25.364  [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:25.364  [208/268] Linking static target drivers/librte_mempool_ring.a
00:02:25.621  [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:25.622  [210/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.622  [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.622  [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.622  [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.879  [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.879  [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:25.879  [216/268] Linking static target lib/librte_cryptodev.a
00:02:25.879  [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:25.879  [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.879  [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.880  [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.138  [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.396  [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.396  [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.396  [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:26.396  [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.396  [226/268] Linking static target lib/librte_ethdev.a
00:02:27.772  [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:28.030  [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.556  [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:30.556  [230/268] Linking static target lib/librte_vhost.a
00:02:32.452  [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.634  [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.568  [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.568  [234/268] Linking target lib/librte_eal.so.24.1
00:02:37.568  [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:37.826  [236/268] Linking target lib/librte_meter.so.24.1
00:02:37.826  [237/268] Linking target lib/librte_ring.so.24.1
00:02:37.826  [238/268] Linking target lib/librte_timer.so.24.1
00:02:37.826  [239/268] Linking target lib/librte_pci.so.24.1
00:02:37.826  [240/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:37.826  [241/268] Linking target lib/librte_dmadev.so.24.1
00:02:37.826  [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:37.826  [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:37.826  [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:37.826  [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:37.826  [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:37.826  [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:37.826  [248/268] Linking target lib/librte_mempool.so.24.1
00:02:37.826  [249/268] Linking target lib/librte_rcu.so.24.1
00:02:38.084  [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:38.084  [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:38.085  [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:38.085  [253/268] Linking target lib/librte_mbuf.so.24.1
00:02:38.343  [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:38.343  [255/268] Linking target lib/librte_compressdev.so.24.1
00:02:38.343  [256/268] Linking target lib/librte_reorder.so.24.1
00:02:38.343  [257/268] Linking target lib/librte_net.so.24.1
00:02:38.343  [258/268] Linking target lib/librte_cryptodev.so.24.1
00:02:38.343  [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:38.343  [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:38.343  [261/268] Linking target lib/librte_cmdline.so.24.1
00:02:38.601  [262/268] Linking target lib/librte_hash.so.24.1
00:02:38.601  [263/268] Linking target lib/librte_ethdev.so.24.1
00:02:38.601  [264/268] Linking target lib/librte_security.so.24.1
00:02:38.601  [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:38.601  [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:38.601  [267/268] Linking target lib/librte_power.so.24.1
00:02:38.601  [268/268] Linking target lib/librte_vhost.so.24.1
00:02:38.601  INFO: autodetecting backend as ninja
00:02:38.601  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112
00:02:45.159    CC lib/ut/ut.o
00:02:45.159    CC lib/ut_mock/mock.o
00:02:45.159    CC lib/log/log.o
00:02:45.159    CC lib/log/log_deprecated.o
00:02:45.159    CC lib/log/log_flags.o
00:02:45.159    LIB libspdk_ut.a
00:02:45.159    SO libspdk_ut.so.2.0
00:02:45.159    LIB libspdk_ut_mock.a
00:02:45.159    LIB libspdk_log.a
00:02:45.159    SO libspdk_ut_mock.so.6.0
00:02:45.159    SYMLINK libspdk_ut.so
00:02:45.159    SO libspdk_log.so.7.1
00:02:45.159    SYMLINK libspdk_ut_mock.so
00:02:45.159    SYMLINK libspdk_log.so
00:02:45.159    CC lib/ioat/ioat.o
00:02:45.416    CC lib/util/base64.o
00:02:45.416    CC lib/util/bit_array.o
00:02:45.416    CC lib/util/cpuset.o
00:02:45.416    CC lib/dma/dma.o
00:02:45.416    CC lib/util/crc16.o
00:02:45.416    CC lib/util/crc32.o
00:02:45.416    CC lib/util/crc64.o
00:02:45.416    CC lib/util/crc32c.o
00:02:45.416    CC lib/util/fd.o
00:02:45.416    CC lib/util/crc32_ieee.o
00:02:45.416    CC lib/util/dif.o
00:02:45.416    CXX lib/trace_parser/trace.o
00:02:45.416    CC lib/util/fd_group.o
00:02:45.416    CC lib/util/file.o
00:02:45.416    CC lib/util/hexlify.o
00:02:45.416    CC lib/util/iov.o
00:02:45.416    CC lib/util/math.o
00:02:45.416    CC lib/util/net.o
00:02:45.416    CC lib/util/pipe.o
00:02:45.416    CC lib/util/strerror_tls.o
00:02:45.416    CC lib/util/string.o
00:02:45.416    CC lib/util/uuid.o
00:02:45.416    CC lib/util/xor.o
00:02:45.416    CC lib/util/zipf.o
00:02:45.416    CC lib/util/md5.o
00:02:45.416    CC lib/vfio_user/host/vfio_user_pci.o
00:02:45.416    CC lib/vfio_user/host/vfio_user.o
00:02:45.673    LIB libspdk_dma.a
00:02:45.673    LIB libspdk_ioat.a
00:02:45.673    SO libspdk_dma.so.5.0
00:02:45.673    SO libspdk_ioat.so.7.0
00:02:45.673    SYMLINK libspdk_dma.so
00:02:45.673    SYMLINK libspdk_ioat.so
00:02:45.673    LIB libspdk_vfio_user.a
00:02:45.673    SO libspdk_vfio_user.so.5.0
00:02:45.929    SYMLINK libspdk_vfio_user.so
00:02:45.929    LIB libspdk_util.a
00:02:45.929    SO libspdk_util.so.10.1
00:02:46.186    LIB libspdk_trace_parser.a
00:02:46.186    SYMLINK libspdk_util.so
00:02:46.186    SO libspdk_trace_parser.so.6.0
00:02:46.186    SYMLINK libspdk_trace_parser.so
00:02:46.444    CC lib/conf/conf.o
00:02:46.444    CC lib/env_dpdk/env.o
00:02:46.444    CC lib/env_dpdk/memory.o
00:02:46.444    CC lib/env_dpdk/pci.o
00:02:46.444    CC lib/env_dpdk/init.o
00:02:46.444    CC lib/env_dpdk/threads.o
00:02:46.444    CC lib/env_dpdk/pci_ioat.o
00:02:46.444    CC lib/env_dpdk/pci_virtio.o
00:02:46.444    CC lib/env_dpdk/pci_event.o
00:02:46.444    CC lib/env_dpdk/pci_vmd.o
00:02:46.444    CC lib/env_dpdk/pci_idxd.o
00:02:46.444    CC lib/env_dpdk/pci_dpdk_2207.o
00:02:46.444    CC lib/env_dpdk/sigbus_handler.o
00:02:46.444    CC lib/env_dpdk/pci_dpdk.o
00:02:46.444    CC lib/env_dpdk/pci_dpdk_2211.o
00:02:46.444    CC lib/rdma_utils/rdma_utils.o
00:02:46.444    CC lib/idxd/idxd.o
00:02:46.444    CC lib/idxd/idxd_user.o
00:02:46.444    CC lib/vmd/vmd.o
00:02:46.444    CC lib/idxd/idxd_kernel.o
00:02:46.444    CC lib/vmd/led.o
00:02:46.444    CC lib/json/json_parse.o
00:02:46.444    CC lib/json/json_util.o
00:02:46.444    CC lib/json/json_write.o
00:02:46.701    LIB libspdk_conf.a
00:02:46.701    SO libspdk_conf.so.6.0
00:02:46.959    LIB libspdk_rdma_utils.a
00:02:46.959    LIB libspdk_json.a
00:02:46.959    SYMLINK libspdk_conf.so
00:02:46.959    SO libspdk_rdma_utils.so.1.0
00:02:46.959    SO libspdk_json.so.6.0
00:02:46.959    SYMLINK libspdk_rdma_utils.so
00:02:46.959    SYMLINK libspdk_json.so
00:02:47.216    LIB libspdk_idxd.a
00:02:47.216    LIB libspdk_vmd.a
00:02:47.217    SO libspdk_idxd.so.12.1
00:02:47.217    SO libspdk_vmd.so.6.0
00:02:47.217    SYMLINK libspdk_idxd.so
00:02:47.217    SYMLINK libspdk_vmd.so
00:02:47.474    CC lib/rdma_provider/common.o
00:02:47.474    CC lib/rdma_provider/rdma_provider_verbs.o
00:02:47.474    CC lib/jsonrpc/jsonrpc_server.o
00:02:47.474    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:47.474    CC lib/jsonrpc/jsonrpc_client.o
00:02:47.474    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:47.474    LIB libspdk_rdma_provider.a
00:02:47.733    SO libspdk_rdma_provider.so.7.0
00:02:47.733    LIB libspdk_jsonrpc.a
00:02:47.733    SYMLINK libspdk_rdma_provider.so
00:02:47.733    SO libspdk_jsonrpc.so.6.0
00:02:47.733    SYMLINK libspdk_jsonrpc.so
00:02:47.733    LIB libspdk_env_dpdk.a
00:02:47.991    SO libspdk_env_dpdk.so.15.1
00:02:47.991    SYMLINK libspdk_env_dpdk.so
00:02:48.250    CC lib/rpc/rpc.o
00:02:48.250    LIB libspdk_rpc.a
00:02:48.509    SO libspdk_rpc.so.6.0
00:02:48.509    SYMLINK libspdk_rpc.so
00:02:48.767    CC lib/trace/trace.o
00:02:48.767    CC lib/trace/trace_flags.o
00:02:48.767    CC lib/trace/trace_rpc.o
00:02:48.767    CC lib/notify/notify.o
00:02:48.767    CC lib/notify/notify_rpc.o
00:02:48.767    CC lib/keyring/keyring.o
00:02:48.767    CC lib/keyring/keyring_rpc.o
00:02:49.026    LIB libspdk_notify.a
00:02:49.026    SO libspdk_notify.so.6.0
00:02:49.026    LIB libspdk_keyring.a
00:02:49.026    LIB libspdk_trace.a
00:02:49.026    SO libspdk_keyring.so.2.0
00:02:49.026    SO libspdk_trace.so.11.0
00:02:49.026    SYMLINK libspdk_notify.so
00:02:49.285    SYMLINK libspdk_keyring.so
00:02:49.285    SYMLINK libspdk_trace.so
00:02:49.544    CC lib/thread/thread.o
00:02:49.544    CC lib/thread/iobuf.o
00:02:49.544    CC lib/sock/sock.o
00:02:49.544    CC lib/sock/sock_rpc.o
00:02:50.112    LIB libspdk_sock.a
00:02:50.112    SO libspdk_sock.so.10.0
00:02:50.112    SYMLINK libspdk_sock.so
00:02:50.678    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:50.678    CC lib/nvme/nvme_ctrlr.o
00:02:50.678    CC lib/nvme/nvme_fabric.o
00:02:50.678    CC lib/nvme/nvme_ns_cmd.o
00:02:50.678    CC lib/nvme/nvme_ns.o
00:02:50.678    CC lib/nvme/nvme_pcie_common.o
00:02:50.678    CC lib/nvme/nvme_pcie.o
00:02:50.678    CC lib/nvme/nvme_quirks.o
00:02:50.678    CC lib/nvme/nvme_qpair.o
00:02:50.678    CC lib/nvme/nvme.o
00:02:50.678    CC lib/nvme/nvme_transport.o
00:02:50.678    CC lib/nvme/nvme_discovery.o
00:02:50.678    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:50.678    CC lib/nvme/nvme_opal.o
00:02:50.678    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:50.678    CC lib/nvme/nvme_tcp.o
00:02:50.678    CC lib/nvme/nvme_zns.o
00:02:50.678    CC lib/nvme/nvme_io_msg.o
00:02:50.678    CC lib/nvme/nvme_poll_group.o
00:02:50.678    CC lib/nvme/nvme_auth.o
00:02:50.678    CC lib/nvme/nvme_stubs.o
00:02:50.678    CC lib/nvme/nvme_cuse.o
00:02:50.678    CC lib/nvme/nvme_rdma.o
00:02:50.935    LIB libspdk_thread.a
00:02:50.935    SO libspdk_thread.so.11.0
00:02:51.192    SYMLINK libspdk_thread.so
00:02:51.450    CC lib/init/json_config.o
00:02:51.450    CC lib/init/subsystem.o
00:02:51.450    CC lib/init/subsystem_rpc.o
00:02:51.450    CC lib/init/rpc.o
00:02:51.450    CC lib/accel/accel.o
00:02:51.450    CC lib/accel/accel_rpc.o
00:02:51.450    CC lib/accel/accel_sw.o
00:02:51.450    CC lib/fsdev/fsdev_rpc.o
00:02:51.450    CC lib/fsdev/fsdev.o
00:02:51.450    CC lib/fsdev/fsdev_io.o
00:02:51.450    CC lib/virtio/virtio.o
00:02:51.450    CC lib/virtio/virtio_vhost_user.o
00:02:51.450    CC lib/virtio/virtio_vfio_user.o
00:02:51.450    CC lib/virtio/virtio_pci.o
00:02:51.450    CC lib/blob/blobstore.o
00:02:51.450    CC lib/blob/request.o
00:02:51.450    CC lib/blob/zeroes.o
00:02:51.450    CC lib/blob/blob_bs_dev.o
00:02:51.708    LIB libspdk_init.a
00:02:51.708    SO libspdk_init.so.6.0
00:02:51.708    SYMLINK libspdk_init.so
00:02:51.708    LIB libspdk_virtio.a
00:02:51.965    SO libspdk_virtio.so.7.0
00:02:51.965    SYMLINK libspdk_virtio.so
00:02:51.965    LIB libspdk_fsdev.a
00:02:52.223    SO libspdk_fsdev.so.2.0
00:02:52.223    CC lib/event/app.o
00:02:52.223    CC lib/event/reactor.o
00:02:52.223    CC lib/event/log_rpc.o
00:02:52.223    CC lib/event/app_rpc.o
00:02:52.223    CC lib/event/scheduler_static.o
00:02:52.223    SYMLINK libspdk_fsdev.so
00:02:52.481    LIB libspdk_accel.a
00:02:52.481    SO libspdk_accel.so.16.0
00:02:52.481    LIB libspdk_nvme.a
00:02:52.481    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:52.481    LIB libspdk_event.a
00:02:52.481    SYMLINK libspdk_accel.so
00:02:52.738    SO libspdk_event.so.14.0
00:02:52.738    SO libspdk_nvme.so.15.0
00:02:52.738    SYMLINK libspdk_event.so
00:02:52.996    SYMLINK libspdk_nvme.so
00:02:52.996    CC lib/bdev/bdev_zone.o
00:02:52.996    CC lib/bdev/bdev.o
00:02:52.996    CC lib/bdev/bdev_rpc.o
00:02:52.996    CC lib/bdev/part.o
00:02:52.996    CC lib/bdev/scsi_nvme.o
00:02:52.996    LIB libspdk_fuse_dispatcher.a
00:02:53.255    SO libspdk_fuse_dispatcher.so.1.0
00:02:53.255    SYMLINK libspdk_fuse_dispatcher.so
00:02:54.630    LIB libspdk_blob.a
00:02:54.630    SO libspdk_blob.so.12.0
00:02:54.630    SYMLINK libspdk_blob.so
00:02:54.889    CC lib/blobfs/blobfs.o
00:02:54.889    CC lib/blobfs/tree.o
00:02:54.889    CC lib/lvol/lvol.o
00:02:55.455    LIB libspdk_bdev.a
00:02:55.455    SO libspdk_bdev.so.17.0
00:02:55.455    SYMLINK libspdk_bdev.so
00:02:55.713    LIB libspdk_blobfs.a
00:02:55.713    SO libspdk_blobfs.so.11.0
00:02:55.972    SYMLINK libspdk_blobfs.so
00:02:55.972    LIB libspdk_lvol.a
00:02:55.972    SO libspdk_lvol.so.11.0
00:02:55.972    CC lib/ftl/ftl_core.o
00:02:55.972    CC lib/ftl/ftl_init.o
00:02:55.972    CC lib/ftl/ftl_layout.o
00:02:55.972    CC lib/ftl/ftl_io.o
00:02:55.972    CC lib/ftl/ftl_debug.o
00:02:55.972    CC lib/ftl/ftl_sb.o
00:02:55.972    CC lib/ftl/ftl_l2p.o
00:02:55.972    CC lib/ftl/ftl_l2p_flat.o
00:02:55.972    CC lib/ftl/ftl_band_ops.o
00:02:55.972    CC lib/ftl/ftl_nv_cache.o
00:02:55.972    CC lib/ftl/ftl_band.o
00:02:55.972    CC lib/nbd/nbd.o
00:02:55.972    CC lib/nbd/nbd_rpc.o
00:02:55.972    CC lib/ftl/ftl_writer.o
00:02:55.972    CC lib/ftl/ftl_reloc.o
00:02:55.972    CC lib/ftl/ftl_rq.o
00:02:55.972    CC lib/ftl/ftl_l2p_cache.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt.o
00:02:55.972    CC lib/ftl/ftl_p2l.o
00:02:55.972    CC lib/ftl/ftl_p2l_log.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:55.972    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:55.972    CC lib/ftl/utils/ftl_conf.o
00:02:55.972    CC lib/ftl/utils/ftl_md.o
00:02:55.972    CC lib/ftl/utils/ftl_bitmap.o
00:02:55.972    CC lib/ftl/utils/ftl_mempool.o
00:02:55.972    CC lib/ftl/utils/ftl_property.o
00:02:55.972    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:55.972    CC lib/ublk/ublk.o
00:02:55.972    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:55.972    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:55.972    CC lib/ublk/ublk_rpc.o
00:02:55.972    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:55.972    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:55.972    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:55.972    CC lib/scsi/dev.o
00:02:55.972    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:55.972    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:55.972    CC lib/scsi/port.o
00:02:55.972    CC lib/scsi/lun.o
00:02:55.972    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:55.972    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:55.972    CC lib/scsi/scsi.o
00:02:55.972    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:55.972    CC lib/scsi/scsi_bdev.o
00:02:55.972    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:55.972    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:55.972    CC lib/scsi/scsi_pr.o
00:02:55.972    CC lib/ftl/base/ftl_base_dev.o
00:02:55.972    CC lib/scsi/task.o
00:02:55.972    CC lib/scsi/scsi_rpc.o
00:02:55.972    CC lib/ftl/base/ftl_base_bdev.o
00:02:55.972    CC lib/ftl/ftl_trace.o
00:02:55.972    CC lib/nvmf/ctrlr.o
00:02:55.972    CC lib/nvmf/ctrlr_discovery.o
00:02:55.972    CC lib/nvmf/subsystem.o
00:02:55.972    CC lib/nvmf/ctrlr_bdev.o
00:02:55.972    CC lib/nvmf/nvmf.o
00:02:55.972    CC lib/nvmf/nvmf_rpc.o
00:02:55.972    SYMLINK libspdk_lvol.so
00:02:55.972    CC lib/nvmf/transport.o
00:02:55.972    CC lib/nvmf/tcp.o
00:02:55.972    CC lib/nvmf/stubs.o
00:02:55.972    CC lib/nvmf/mdns_server.o
00:02:55.972    CC lib/nvmf/rdma.o
00:02:55.972    CC lib/nvmf/auth.o
00:02:56.539    LIB libspdk_scsi.a
00:02:56.539    LIB libspdk_nbd.a
00:02:56.798    SO libspdk_nbd.so.7.0
00:02:56.798    SO libspdk_scsi.so.9.0
00:02:56.798    LIB libspdk_ublk.a
00:02:56.798    SYMLINK libspdk_nbd.so
00:02:56.798    SYMLINK libspdk_scsi.so
00:02:56.798    SO libspdk_ublk.so.3.0
00:02:56.798    SYMLINK libspdk_ublk.so
00:02:57.056    LIB libspdk_ftl.a
00:02:57.056    CC lib/vhost/vhost.o
00:02:57.056    CC lib/vhost/vhost_rpc.o
00:02:57.056    CC lib/vhost/rte_vhost_user.o
00:02:57.056    CC lib/vhost/vhost_scsi.o
00:02:57.056    CC lib/vhost/vhost_blk.o
00:02:57.056    CC lib/iscsi/conn.o
00:02:57.056    CC lib/iscsi/init_grp.o
00:02:57.056    CC lib/iscsi/iscsi.o
00:02:57.056    CC lib/iscsi/param.o
00:02:57.056    CC lib/iscsi/portal_grp.o
00:02:57.056    CC lib/iscsi/iscsi_subsystem.o
00:02:57.056    CC lib/iscsi/tgt_node.o
00:02:57.056    CC lib/iscsi/task.o
00:02:57.056    CC lib/iscsi/iscsi_rpc.o
00:02:57.314    SO libspdk_ftl.so.9.0
00:02:57.583    SYMLINK libspdk_ftl.so
00:02:58.173    LIB libspdk_vhost.a
00:02:58.173    SO libspdk_vhost.so.8.0
00:02:58.173    SYMLINK libspdk_vhost.so
00:02:58.173    LIB libspdk_nvmf.a
00:02:58.430    SO libspdk_nvmf.so.20.0
00:02:58.430    LIB libspdk_iscsi.a
00:02:58.430    SO libspdk_iscsi.so.8.0
00:02:58.688    SYMLINK libspdk_nvmf.so
00:02:58.688    SYMLINK libspdk_iscsi.so
00:02:59.254    CC module/env_dpdk/env_dpdk_rpc.o
00:02:59.511    CC module/keyring/linux/keyring.o
00:02:59.511    CC module/keyring/linux/keyring_rpc.o
00:02:59.511    LIB libspdk_env_dpdk_rpc.a
00:02:59.511    CC module/blob/bdev/blob_bdev.o
00:02:59.511    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:59.511    CC module/scheduler/gscheduler/gscheduler.o
00:02:59.511    CC module/keyring/file/keyring.o
00:02:59.511    CC module/keyring/file/keyring_rpc.o
00:02:59.511    CC module/fsdev/aio/linux_aio_mgr.o
00:02:59.511    CC module/fsdev/aio/fsdev_aio.o
00:02:59.511    CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:59.511    SO libspdk_env_dpdk_rpc.so.6.0
00:02:59.511    CC module/accel/dsa/accel_dsa.o
00:02:59.511    CC module/accel/dsa/accel_dsa_rpc.o
00:02:59.511    CC module/sock/posix/posix.o
00:02:59.511    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:59.511    CC module/accel/error/accel_error.o
00:02:59.511    CC module/accel/iaa/accel_iaa.o
00:02:59.511    CC module/accel/error/accel_error_rpc.o
00:02:59.511    CC module/accel/iaa/accel_iaa_rpc.o
00:02:59.511    CC module/accel/ioat/accel_ioat.o
00:02:59.511    CC module/accel/ioat/accel_ioat_rpc.o
00:02:59.511    SYMLINK libspdk_env_dpdk_rpc.so
00:02:59.511    LIB libspdk_keyring_linux.a
00:02:59.769    LIB libspdk_keyring_file.a
00:02:59.769    SO libspdk_keyring_linux.so.1.0
00:02:59.769    LIB libspdk_scheduler_gscheduler.a
00:02:59.769    LIB libspdk_scheduler_dpdk_governor.a
00:02:59.769    LIB libspdk_scheduler_dynamic.a
00:02:59.769    SO libspdk_keyring_file.so.2.0
00:02:59.769    SO libspdk_scheduler_gscheduler.so.4.0
00:02:59.769    SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:59.769    SO libspdk_scheduler_dynamic.so.4.0
00:02:59.769    SYMLINK libspdk_keyring_linux.so
00:02:59.769    LIB libspdk_accel_iaa.a
00:02:59.769    LIB libspdk_accel_ioat.a
00:02:59.769    LIB libspdk_blob_bdev.a
00:02:59.769    SYMLINK libspdk_keyring_file.so
00:02:59.769    LIB libspdk_accel_error.a
00:02:59.769    SYMLINK libspdk_scheduler_gscheduler.so
00:02:59.769    SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:59.769    SO libspdk_accel_iaa.so.3.0
00:02:59.770    SO libspdk_accel_ioat.so.6.0
00:02:59.770    SYMLINK libspdk_scheduler_dynamic.so
00:02:59.770    SO libspdk_accel_error.so.2.0
00:02:59.770    SO libspdk_blob_bdev.so.12.0
00:02:59.770    LIB libspdk_accel_dsa.a
00:02:59.770    SO libspdk_accel_dsa.so.5.0
00:02:59.770    SYMLINK libspdk_accel_iaa.so
00:02:59.770    SYMLINK libspdk_accel_error.so
00:02:59.770    SYMLINK libspdk_accel_ioat.so
00:02:59.770    SYMLINK libspdk_blob_bdev.so
00:02:59.770    SYMLINK libspdk_accel_dsa.so
00:03:00.028    LIB libspdk_fsdev_aio.a
00:03:00.286    SO libspdk_fsdev_aio.so.1.0
00:03:00.286    LIB libspdk_sock_posix.a
00:03:00.286    SO libspdk_sock_posix.so.6.0
00:03:00.286    SYMLINK libspdk_fsdev_aio.so
00:03:00.286    SYMLINK libspdk_sock_posix.so
00:03:00.286    CC module/bdev/gpt/vbdev_gpt.o
00:03:00.286    CC module/bdev/gpt/gpt.o
00:03:00.286    CC module/blobfs/bdev/blobfs_bdev.o
00:03:00.286    CC module/bdev/null/bdev_null.o
00:03:00.286    CC module/bdev/null/bdev_null_rpc.o
00:03:00.286    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:00.545    CC module/bdev/delay/vbdev_delay.o
00:03:00.545    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:00.545    CC module/bdev/passthru/vbdev_passthru.o
00:03:00.545    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:00.545    CC module/bdev/malloc/bdev_malloc.o
00:03:00.545    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:00.545    CC module/bdev/error/vbdev_error.o
00:03:00.545    CC module/bdev/error/vbdev_error_rpc.o
00:03:00.545    CC module/bdev/split/vbdev_split.o
00:03:00.545    CC module/bdev/raid/bdev_raid_sb.o
00:03:00.545    CC module/bdev/raid/bdev_raid.o
00:03:00.545    CC module/bdev/raid/bdev_raid_rpc.o
00:03:00.545    CC module/bdev/split/vbdev_split_rpc.o
00:03:00.545    CC module/bdev/raid/raid0.o
00:03:00.545    CC module/bdev/raid/raid1.o
00:03:00.545    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:00.545    CC module/bdev/raid/concat.o
00:03:00.545    CC module/bdev/lvol/vbdev_lvol.o
00:03:00.545    CC module/bdev/ftl/bdev_ftl.o
00:03:00.545    CC module/bdev/iscsi/bdev_iscsi.o
00:03:00.545    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:00.545    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:00.545    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:00.545    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:00.545    CC module/bdev/nvme/bdev_nvme.o
00:03:00.545    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:00.545    CC module/bdev/nvme/nvme_rpc.o
00:03:00.545    CC module/bdev/nvme/bdev_mdns_client.o
00:03:00.545    CC module/bdev/nvme/vbdev_opal.o
00:03:00.545    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:00.545    CC module/bdev/aio/bdev_aio_rpc.o
00:03:00.545    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:00.545    CC module/bdev/aio/bdev_aio.o
00:03:00.545    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:00.545    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:00.545    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:00.545    LIB libspdk_blobfs_bdev.a
00:03:00.804    SO libspdk_blobfs_bdev.so.6.0
00:03:00.804    LIB libspdk_bdev_split.a
00:03:00.804    SO libspdk_bdev_split.so.6.0
00:03:00.804    LIB libspdk_bdev_error.a
00:03:00.804    LIB libspdk_bdev_gpt.a
00:03:00.804    LIB libspdk_bdev_null.a
00:03:00.804    SYMLINK libspdk_blobfs_bdev.so
00:03:00.804    SO libspdk_bdev_error.so.6.0
00:03:00.804    LIB libspdk_bdev_passthru.a
00:03:00.804    SO libspdk_bdev_null.so.6.0
00:03:00.804    SO libspdk_bdev_gpt.so.6.0
00:03:00.804    SYMLINK libspdk_bdev_split.so
00:03:00.804    LIB libspdk_bdev_ftl.a
00:03:00.804    LIB libspdk_bdev_zone_block.a
00:03:00.804    SYMLINK libspdk_bdev_error.so
00:03:00.804    LIB libspdk_bdev_delay.a
00:03:00.804    SO libspdk_bdev_passthru.so.6.0
00:03:00.804    LIB libspdk_bdev_aio.a
00:03:00.804    SYMLINK libspdk_bdev_gpt.so
00:03:00.804    SO libspdk_bdev_zone_block.so.6.0
00:03:00.804    SO libspdk_bdev_ftl.so.6.0
00:03:00.804    LIB libspdk_bdev_iscsi.a
00:03:00.804    LIB libspdk_bdev_malloc.a
00:03:00.804    SO libspdk_bdev_delay.so.6.0
00:03:00.804    SYMLINK libspdk_bdev_null.so
00:03:00.804    SO libspdk_bdev_aio.so.6.0
00:03:00.804    SYMLINK libspdk_bdev_passthru.so
00:03:00.804    SO libspdk_bdev_iscsi.so.6.0
00:03:00.804    SO libspdk_bdev_malloc.so.6.0
00:03:01.063    SYMLINK libspdk_bdev_zone_block.so
00:03:01.063    SYMLINK libspdk_bdev_ftl.so
00:03:01.063    SYMLINK libspdk_bdev_delay.so
00:03:01.063    SYMLINK libspdk_bdev_aio.so
00:03:01.063    SYMLINK libspdk_bdev_iscsi.so
00:03:01.063    SYMLINK libspdk_bdev_malloc.so
00:03:01.063    LIB libspdk_bdev_lvol.a
00:03:01.063    LIB libspdk_bdev_virtio.a
00:03:01.063    SO libspdk_bdev_lvol.so.6.0
00:03:01.063    SO libspdk_bdev_virtio.so.6.0
00:03:01.063    SYMLINK libspdk_bdev_lvol.so
00:03:01.063    SYMLINK libspdk_bdev_virtio.so
00:03:01.632    LIB libspdk_bdev_raid.a
00:03:01.632    SO libspdk_bdev_raid.so.6.0
00:03:01.632    SYMLINK libspdk_bdev_raid.so
00:03:03.010    LIB libspdk_bdev_nvme.a
00:03:03.010    SO libspdk_bdev_nvme.so.7.1
00:03:03.010    SYMLINK libspdk_bdev_nvme.so
00:03:03.947    CC module/event/subsystems/fsdev/fsdev.o
00:03:03.947    CC module/event/subsystems/scheduler/scheduler.o
00:03:03.947    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:03.947    CC module/event/subsystems/vmd/vmd.o
00:03:03.947    CC module/event/subsystems/iobuf/iobuf.o
00:03:03.947    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:03.947    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:03.947    CC module/event/subsystems/keyring/keyring.o
00:03:03.947    CC module/event/subsystems/sock/sock.o
00:03:03.947    LIB libspdk_event_scheduler.a
00:03:03.947    LIB libspdk_event_fsdev.a
00:03:03.947    SO libspdk_event_scheduler.so.4.0
00:03:03.947    LIB libspdk_event_vhost_blk.a
00:03:03.947    LIB libspdk_event_sock.a
00:03:03.947    LIB libspdk_event_vmd.a
00:03:03.947    LIB libspdk_event_keyring.a
00:03:03.947    SO libspdk_event_fsdev.so.1.0
00:03:03.947    LIB libspdk_event_iobuf.a
00:03:03.947    SO libspdk_event_vhost_blk.so.3.0
00:03:03.947    SO libspdk_event_sock.so.5.0
00:03:03.947    SO libspdk_event_keyring.so.1.0
00:03:03.947    SO libspdk_event_iobuf.so.3.0
00:03:04.207    SO libspdk_event_vmd.so.6.0
00:03:04.207    SYMLINK libspdk_event_scheduler.so
00:03:04.207    SYMLINK libspdk_event_fsdev.so
00:03:04.207    SYMLINK libspdk_event_vhost_blk.so
00:03:04.207    SYMLINK libspdk_event_sock.so
00:03:04.207    SYMLINK libspdk_event_keyring.so
00:03:04.207    SYMLINK libspdk_event_vmd.so
00:03:04.207    SYMLINK libspdk_event_iobuf.so
00:03:04.466    CC module/event/subsystems/accel/accel.o
00:03:04.725    LIB libspdk_event_accel.a
00:03:04.725    SO libspdk_event_accel.so.6.0
00:03:04.725    SYMLINK libspdk_event_accel.so
00:03:05.291    CC module/event/subsystems/bdev/bdev.o
00:03:05.291    LIB libspdk_event_bdev.a
00:03:05.291    SO libspdk_event_bdev.so.6.0
00:03:05.549    SYMLINK libspdk_event_bdev.so
00:03:05.807    CC module/event/subsystems/nbd/nbd.o
00:03:05.807    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:05.807    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:05.807    CC module/event/subsystems/scsi/scsi.o
00:03:05.807    CC module/event/subsystems/ublk/ublk.o
00:03:06.066    LIB libspdk_event_nbd.a
00:03:06.066    SO libspdk_event_nbd.so.6.0
00:03:06.066    LIB libspdk_event_ublk.a
00:03:06.066    LIB libspdk_event_scsi.a
00:03:06.066    LIB libspdk_event_nvmf.a
00:03:06.066    SO libspdk_event_ublk.so.3.0
00:03:06.066    SYMLINK libspdk_event_nbd.so
00:03:06.066    SO libspdk_event_scsi.so.6.0
00:03:06.066    SO libspdk_event_nvmf.so.6.0
00:03:06.066    SYMLINK libspdk_event_ublk.so
00:03:06.066    SYMLINK libspdk_event_scsi.so
00:03:06.066    SYMLINK libspdk_event_nvmf.so
00:03:06.632    CC module/event/subsystems/iscsi/iscsi.o
00:03:06.632    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:06.632    LIB libspdk_event_vhost_scsi.a
00:03:06.632    LIB libspdk_event_iscsi.a
00:03:06.632    SO libspdk_event_vhost_scsi.so.3.0
00:03:06.632    SO libspdk_event_iscsi.so.6.0
00:03:06.632    SYMLINK libspdk_event_vhost_scsi.so
00:03:06.632    SYMLINK libspdk_event_iscsi.so
00:03:06.890    SO libspdk.so.6.0
00:03:06.890    SYMLINK libspdk.so
00:03:07.471    TEST_HEADER include/spdk/accel.h
00:03:07.471    TEST_HEADER include/spdk/accel_module.h
00:03:07.471    CC test/rpc_client/rpc_client_test.o
00:03:07.471    TEST_HEADER include/spdk/barrier.h
00:03:07.471    TEST_HEADER include/spdk/assert.h
00:03:07.471    TEST_HEADER include/spdk/base64.h
00:03:07.471    TEST_HEADER include/spdk/bdev.h
00:03:07.471    TEST_HEADER include/spdk/bdev_module.h
00:03:07.471    TEST_HEADER include/spdk/bdev_zone.h
00:03:07.471    TEST_HEADER include/spdk/bit_array.h
00:03:07.471    TEST_HEADER include/spdk/bit_pool.h
00:03:07.471    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:07.471    TEST_HEADER include/spdk/blob_bdev.h
00:03:07.471    TEST_HEADER include/spdk/blob.h
00:03:07.471    TEST_HEADER include/spdk/blobfs.h
00:03:07.471    TEST_HEADER include/spdk/conf.h
00:03:07.471    TEST_HEADER include/spdk/config.h
00:03:07.471    TEST_HEADER include/spdk/crc16.h
00:03:07.471    TEST_HEADER include/spdk/cpuset.h
00:03:07.471    TEST_HEADER include/spdk/crc32.h
00:03:07.471    TEST_HEADER include/spdk/dif.h
00:03:07.471    TEST_HEADER include/spdk/crc64.h
00:03:07.471    TEST_HEADER include/spdk/dma.h
00:03:07.471    TEST_HEADER include/spdk/env_dpdk.h
00:03:07.471    TEST_HEADER include/spdk/endian.h
00:03:07.471    TEST_HEADER include/spdk/env.h
00:03:07.471    CC app/spdk_nvme_discover/discovery_aer.o
00:03:07.471    CC app/spdk_top/spdk_top.o
00:03:07.471    TEST_HEADER include/spdk/event.h
00:03:07.471    CC app/trace_record/trace_record.o
00:03:07.471    TEST_HEADER include/spdk/fd.h
00:03:07.471    TEST_HEADER include/spdk/file.h
00:03:07.471    TEST_HEADER include/spdk/fd_group.h
00:03:07.471    CC app/spdk_nvme_perf/perf.o
00:03:07.471    TEST_HEADER include/spdk/fsdev.h
00:03:07.471    CXX app/trace/trace.o
00:03:07.471    TEST_HEADER include/spdk/ftl.h
00:03:07.471    TEST_HEADER include/spdk/fsdev_module.h
00:03:07.471    TEST_HEADER include/spdk/gpt_spec.h
00:03:07.471    TEST_HEADER include/spdk/hexlify.h
00:03:07.471    TEST_HEADER include/spdk/histogram_data.h
00:03:07.471    TEST_HEADER include/spdk/idxd_spec.h
00:03:07.471    TEST_HEADER include/spdk/idxd.h
00:03:07.471    TEST_HEADER include/spdk/init.h
00:03:07.471    CC app/spdk_lspci/spdk_lspci.o
00:03:07.471    TEST_HEADER include/spdk/ioat_spec.h
00:03:07.471    TEST_HEADER include/spdk/ioat.h
00:03:07.471    CC app/spdk_nvme_identify/identify.o
00:03:07.471    TEST_HEADER include/spdk/iscsi_spec.h
00:03:07.471    TEST_HEADER include/spdk/json.h
00:03:07.471    TEST_HEADER include/spdk/keyring.h
00:03:07.471    TEST_HEADER include/spdk/jsonrpc.h
00:03:07.471    TEST_HEADER include/spdk/keyring_module.h
00:03:07.471    TEST_HEADER include/spdk/md5.h
00:03:07.471    TEST_HEADER include/spdk/log.h
00:03:07.471    TEST_HEADER include/spdk/likely.h
00:03:07.471    TEST_HEADER include/spdk/memory.h
00:03:07.471    TEST_HEADER include/spdk/lvol.h
00:03:07.471    TEST_HEADER include/spdk/nbd.h
00:03:07.471    TEST_HEADER include/spdk/net.h
00:03:07.471    TEST_HEADER include/spdk/notify.h
00:03:07.471    TEST_HEADER include/spdk/mmio.h
00:03:07.471    TEST_HEADER include/spdk/nvme.h
00:03:07.471    TEST_HEADER include/spdk/nvme_intel.h
00:03:07.471    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:07.471    TEST_HEADER include/spdk/nvme_spec.h
00:03:07.471    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:07.471    TEST_HEADER include/spdk/nvme_zns.h
00:03:07.471    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:07.471    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:07.471    TEST_HEADER include/spdk/nvmf_transport.h
00:03:07.471    TEST_HEADER include/spdk/nvmf.h
00:03:07.471    TEST_HEADER include/spdk/nvmf_spec.h
00:03:07.471    TEST_HEADER include/spdk/opal.h
00:03:07.471    TEST_HEADER include/spdk/opal_spec.h
00:03:07.471    TEST_HEADER include/spdk/pci_ids.h
00:03:07.471    TEST_HEADER include/spdk/queue.h
00:03:07.471    TEST_HEADER include/spdk/pipe.h
00:03:07.471    TEST_HEADER include/spdk/rpc.h
00:03:07.471    TEST_HEADER include/spdk/reduce.h
00:03:07.471    TEST_HEADER include/spdk/scsi.h
00:03:07.471    TEST_HEADER include/spdk/scheduler.h
00:03:07.471    TEST_HEADER include/spdk/sock.h
00:03:07.471    TEST_HEADER include/spdk/stdinc.h
00:03:07.471    TEST_HEADER include/spdk/scsi_spec.h
00:03:07.471    TEST_HEADER include/spdk/string.h
00:03:07.471    TEST_HEADER include/spdk/thread.h
00:03:07.471    TEST_HEADER include/spdk/trace.h
00:03:07.471    TEST_HEADER include/spdk/tree.h
00:03:07.471    TEST_HEADER include/spdk/trace_parser.h
00:03:07.471    CC app/nvmf_tgt/nvmf_main.o
00:03:07.471    TEST_HEADER include/spdk/util.h
00:03:07.471    TEST_HEADER include/spdk/ublk.h
00:03:07.471    CC app/iscsi_tgt/iscsi_tgt.o
00:03:07.471    TEST_HEADER include/spdk/uuid.h
00:03:07.471    TEST_HEADER include/spdk/version.h
00:03:07.471    TEST_HEADER include/spdk/vhost.h
00:03:07.471    TEST_HEADER include/spdk/vmd.h
00:03:07.471    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:07.471    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:07.471    TEST_HEADER include/spdk/zipf.h
00:03:07.471    CXX test/cpp_headers/accel.o
00:03:07.471    TEST_HEADER include/spdk/xor.h
00:03:07.471    CXX test/cpp_headers/accel_module.o
00:03:07.471    CXX test/cpp_headers/barrier.o
00:03:07.471    CXX test/cpp_headers/assert.o
00:03:07.471    CXX test/cpp_headers/base64.o
00:03:07.471    CXX test/cpp_headers/bdev_module.o
00:03:07.471    CXX test/cpp_headers/bdev.o
00:03:07.471    CXX test/cpp_headers/bdev_zone.o
00:03:07.471    CXX test/cpp_headers/bit_array.o
00:03:07.471    CXX test/cpp_headers/blob_bdev.o
00:03:07.471    CXX test/cpp_headers/bit_pool.o
00:03:07.471    CXX test/cpp_headers/blobfs_bdev.o
00:03:07.471    CXX test/cpp_headers/blob.o
00:03:07.471    CC app/spdk_tgt/spdk_tgt.o
00:03:07.471    CXX test/cpp_headers/blobfs.o
00:03:07.471    CXX test/cpp_headers/conf.o
00:03:07.471    CXX test/cpp_headers/config.o
00:03:07.471    CXX test/cpp_headers/cpuset.o
00:03:07.471    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:07.471    CXX test/cpp_headers/crc16.o
00:03:07.471    CXX test/cpp_headers/dif.o
00:03:07.471    CXX test/cpp_headers/crc64.o
00:03:07.471    CXX test/cpp_headers/dma.o
00:03:07.471    CXX test/cpp_headers/crc32.o
00:03:07.471    CC app/spdk_dd/spdk_dd.o
00:03:07.471    CXX test/cpp_headers/endian.o
00:03:07.472    CXX test/cpp_headers/env_dpdk.o
00:03:07.472    CXX test/cpp_headers/event.o
00:03:07.472    CXX test/cpp_headers/env.o
00:03:07.472    CXX test/cpp_headers/fd.o
00:03:07.472    CXX test/cpp_headers/fd_group.o
00:03:07.472    CXX test/cpp_headers/file.o
00:03:07.472    CXX test/cpp_headers/fsdev_module.o
00:03:07.472    CXX test/cpp_headers/fsdev.o
00:03:07.472    CXX test/cpp_headers/gpt_spec.o
00:03:07.472    CXX test/cpp_headers/ftl.o
00:03:07.472    CXX test/cpp_headers/hexlify.o
00:03:07.472    CXX test/cpp_headers/histogram_data.o
00:03:07.472    CXX test/cpp_headers/idxd_spec.o
00:03:07.472    CXX test/cpp_headers/idxd.o
00:03:07.472    CXX test/cpp_headers/init.o
00:03:07.472    CXX test/cpp_headers/iscsi_spec.o
00:03:07.472    CXX test/cpp_headers/ioat_spec.o
00:03:07.472    CXX test/cpp_headers/ioat.o
00:03:07.472    CXX test/cpp_headers/jsonrpc.o
00:03:07.472    CXX test/cpp_headers/json.o
00:03:07.472    CXX test/cpp_headers/keyring_module.o
00:03:07.472    CXX test/cpp_headers/keyring.o
00:03:07.472    CXX test/cpp_headers/log.o
00:03:07.472    CXX test/cpp_headers/likely.o
00:03:07.472    CXX test/cpp_headers/md5.o
00:03:07.472    CXX test/cpp_headers/lvol.o
00:03:07.472    CXX test/cpp_headers/memory.o
00:03:07.472    CXX test/cpp_headers/nbd.o
00:03:07.472    CXX test/cpp_headers/mmio.o
00:03:07.472    CXX test/cpp_headers/net.o
00:03:07.472    CXX test/cpp_headers/notify.o
00:03:07.472    CXX test/cpp_headers/nvme_intel.o
00:03:07.472    CXX test/cpp_headers/nvme.o
00:03:07.472    CXX test/cpp_headers/nvme_ocssd.o
00:03:07.472    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:07.472    CXX test/cpp_headers/nvme_zns.o
00:03:07.472    CXX test/cpp_headers/nvme_spec.o
00:03:07.472    CXX test/cpp_headers/nvmf_cmd.o
00:03:07.472    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:07.472    CXX test/cpp_headers/nvmf.o
00:03:07.472    CXX test/cpp_headers/nvmf_spec.o
00:03:07.472    CXX test/cpp_headers/nvmf_transport.o
00:03:07.472    CXX test/cpp_headers/opal.o
00:03:07.472    CXX test/cpp_headers/opal_spec.o
00:03:07.472    CXX test/cpp_headers/pci_ids.o
00:03:07.472    CXX test/cpp_headers/pipe.o
00:03:07.472    CXX test/cpp_headers/reduce.o
00:03:07.472    CXX test/cpp_headers/queue.o
00:03:07.472    CXX test/cpp_headers/rpc.o
00:03:07.472    CXX test/cpp_headers/scheduler.o
00:03:07.472    CXX test/cpp_headers/scsi_spec.o
00:03:07.472    CXX test/cpp_headers/scsi.o
00:03:07.472    CXX test/cpp_headers/sock.o
00:03:07.472    CXX test/cpp_headers/stdinc.o
00:03:07.472    CXX test/cpp_headers/string.o
00:03:07.472    CXX test/cpp_headers/thread.o
00:03:07.472    CXX test/cpp_headers/trace.o
00:03:07.472    CXX test/cpp_headers/trace_parser.o
00:03:07.472    CXX test/cpp_headers/tree.o
00:03:07.472    CXX test/cpp_headers/ublk.o
00:03:07.756    CXX test/cpp_headers/util.o
00:03:07.756    CC test/thread/poller_perf/poller_perf.o
00:03:07.756    CC examples/util/zipf/zipf.o
00:03:07.756    CC test/env/pci/pci_ut.o
00:03:07.756    CC test/app/stub/stub.o
00:03:07.756    CC test/env/vtophys/vtophys.o
00:03:07.756    CXX test/cpp_headers/uuid.o
00:03:07.756    CC examples/ioat/perf/perf.o
00:03:07.756    CC test/env/memory/memory_ut.o
00:03:07.756    CC examples/ioat/verify/verify.o
00:03:07.756    CC test/app/histogram_perf/histogram_perf.o
00:03:07.756    CC test/app/jsoncat/jsoncat.o
00:03:07.756    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:07.756    CC app/fio/nvme/fio_plugin.o
00:03:07.756    CC test/app/bdev_svc/bdev_svc.o
00:03:07.756    CC test/dma/test_dma/test_dma.o
00:03:07.756    CC app/fio/bdev/fio_plugin.o
00:03:08.030    LINK spdk_lspci
00:03:08.030    LINK rpc_client_test
00:03:08.300    CC test/env/mem_callbacks/mem_callbacks.o
00:03:08.300    LINK nvmf_tgt
00:03:08.300    LINK spdk_nvme_discover
00:03:08.300    LINK iscsi_tgt
00:03:08.300    LINK interrupt_tgt
00:03:08.300    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:08.300    CXX test/cpp_headers/version.o
00:03:08.300    CXX test/cpp_headers/vfio_user_pci.o
00:03:08.300    CXX test/cpp_headers/vfio_user_spec.o
00:03:08.300    CXX test/cpp_headers/vhost.o
00:03:08.300    CXX test/cpp_headers/vmd.o
00:03:08.300    LINK vtophys
00:03:08.300    CXX test/cpp_headers/xor.o
00:03:08.300    CXX test/cpp_headers/zipf.o
00:03:08.300    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:08.300    LINK histogram_perf
00:03:08.300    LINK poller_perf
00:03:08.300    LINK spdk_tgt
00:03:08.300    LINK spdk_trace_record
00:03:08.300    LINK zipf
00:03:08.300    LINK jsoncat
00:03:08.300    LINK env_dpdk_post_init
00:03:08.300    LINK stub
00:03:08.559    LINK bdev_svc
00:03:08.559    LINK ioat_perf
00:03:08.559    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:08.559    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:08.559    LINK verify
00:03:08.559    LINK spdk_dd
00:03:08.559    LINK spdk_trace
00:03:08.559    LINK pci_ut
00:03:08.818    LINK test_dma
00:03:08.818    LINK spdk_bdev
00:03:08.818    LINK nvme_fuzz
00:03:08.818    LINK spdk_nvme
00:03:08.818    LINK mem_callbacks
00:03:08.818    CC test/event/reactor/reactor.o
00:03:08.818    CC test/event/event_perf/event_perf.o
00:03:08.818    CC test/event/reactor_perf/reactor_perf.o
00:03:08.818    CC test/event/app_repeat/app_repeat.o
00:03:08.818    LINK vhost_fuzz
00:03:08.818    CC test/event/scheduler/scheduler.o
00:03:09.077    LINK spdk_nvme_perf
00:03:09.077    CC app/vhost/vhost.o
00:03:09.077    CC examples/vmd/led/led.o
00:03:09.077    CC examples/vmd/lsvmd/lsvmd.o
00:03:09.077    CC examples/idxd/perf/perf.o
00:03:09.077    CC examples/sock/hello_world/hello_sock.o
00:03:09.077    LINK spdk_nvme_identify
00:03:09.077    CC examples/thread/thread/thread_ex.o
00:03:09.077    LINK spdk_top
00:03:09.077    LINK event_perf
00:03:09.077    LINK reactor_perf
00:03:09.077    LINK reactor
00:03:09.077    LINK app_repeat
00:03:09.077    LINK lsvmd
00:03:09.077    LINK led
00:03:09.077    LINK scheduler
00:03:09.336    LINK vhost
00:03:09.336    LINK hello_sock
00:03:09.336    CC test/nvme/overhead/overhead.o
00:03:09.336    CC test/nvme/reserve/reserve.o
00:03:09.336    LINK thread
00:03:09.336    CC test/nvme/compliance/nvme_compliance.o
00:03:09.336    CC test/nvme/startup/startup.o
00:03:09.336    CC test/nvme/aer/aer.o
00:03:09.336    CC test/nvme/reset/reset.o
00:03:09.336    CC test/nvme/e2edp/nvme_dp.o
00:03:09.336    CC test/nvme/simple_copy/simple_copy.o
00:03:09.336    CC test/nvme/boot_partition/boot_partition.o
00:03:09.336    CC test/nvme/err_injection/err_injection.o
00:03:09.336    CC test/nvme/sgl/sgl.o
00:03:09.336    CC test/nvme/fused_ordering/fused_ordering.o
00:03:09.336    CC test/nvme/connect_stress/connect_stress.o
00:03:09.336    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:09.336    CC test/accel/dif/dif.o
00:03:09.336    CC test/nvme/cuse/cuse.o
00:03:09.336    CC test/nvme/fdp/fdp.o
00:03:09.336    CC test/blobfs/mkfs/mkfs.o
00:03:09.336    LINK idxd_perf
00:03:09.336    LINK memory_ut
00:03:09.595    CC test/lvol/esnap/esnap.o
00:03:09.595    LINK boot_partition
00:03:09.595    LINK startup
00:03:09.595    LINK err_injection
00:03:09.595    LINK reserve
00:03:09.595    LINK doorbell_aers
00:03:09.595    LINK connect_stress
00:03:09.595    LINK fused_ordering
00:03:09.595    LINK mkfs
00:03:09.595    LINK overhead
00:03:09.595    LINK simple_copy
00:03:09.595    LINK reset
00:03:09.595    LINK sgl
00:03:09.595    LINK nvme_dp
00:03:09.595    LINK aer
00:03:09.595    LINK nvme_compliance
00:03:09.595    LINK fdp
00:03:09.853    CC examples/nvme/reconnect/reconnect.o
00:03:09.853    CC examples/nvme/hello_world/hello_world.o
00:03:09.853    CC examples/nvme/abort/abort.o
00:03:09.853    CC examples/nvme/arbitration/arbitration.o
00:03:09.853    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:09.853    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:09.853    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:09.853    CC examples/nvme/hotplug/hotplug.o
00:03:09.853    CC examples/accel/perf/accel_perf.o
00:03:09.853    CC examples/fsdev/hello_world/hello_fsdev.o
00:03:09.853    CC examples/blob/cli/blobcli.o
00:03:09.853    CC examples/blob/hello_world/hello_blob.o
00:03:10.112    LINK pmr_persistence
00:03:10.112    LINK cmb_copy
00:03:10.112    LINK hello_world
00:03:10.112    LINK hotplug
00:03:10.112    LINK dif
00:03:10.112    LINK iscsi_fuzz
00:03:10.112    LINK arbitration
00:03:10.112    LINK reconnect
00:03:10.112    LINK hello_blob
00:03:10.112    LINK abort
00:03:10.112    LINK hello_fsdev
00:03:10.371    LINK nvme_manage
00:03:10.371    LINK accel_perf
00:03:10.371    LINK blobcli
00:03:10.630    LINK cuse
00:03:10.630    CC test/bdev/bdevio/bdevio.o
00:03:10.889    CC examples/bdev/bdevperf/bdevperf.o
00:03:10.889    CC examples/bdev/hello_world/hello_bdev.o
00:03:11.148    LINK bdevio
00:03:11.148    LINK hello_bdev
00:03:11.716    LINK bdevperf
00:03:12.284    CC examples/nvmf/nvmf/nvmf.o
00:03:12.543    LINK nvmf
00:03:14.449    LINK esnap
00:03:14.449  
00:03:14.449  real	1m0.197s
00:03:14.449  user	8m19.279s
00:03:14.449  sys	4m24.062s
00:03:14.449   13:29:14 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:14.449   13:29:14 make -- common/autotest_common.sh@10 -- $ set +x
00:03:14.449  ************************************
00:03:14.449  END TEST make
00:03:14.449  ************************************
00:03:14.708   13:29:14  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:14.708   13:29:14  -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:14.708   13:29:14  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:14.708   13:29:14  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.708   13:29:14  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:14.708   13:29:14  -- pm/common@44 -- $ pid=3025343
00:03:14.708   13:29:14  -- pm/common@50 -- $ kill -TERM 3025343
00:03:14.708   13:29:14  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.708   13:29:14  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:14.708   13:29:14  -- pm/common@44 -- $ pid=3025345
00:03:14.708   13:29:14  -- pm/common@50 -- $ kill -TERM 3025345
00:03:14.708   13:29:14  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.708   13:29:14  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:14.708   13:29:14  -- pm/common@44 -- $ pid=3025346
00:03:14.708   13:29:14  -- pm/common@50 -- $ kill -TERM 3025346
00:03:14.708   13:29:14  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.708   13:29:14  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:14.708   13:29:14  -- pm/common@44 -- $ pid=3025371
00:03:14.708   13:29:14  -- pm/common@50 -- $ sudo -E kill -TERM 3025371
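The four kills above follow pm/common's pidfile convention: one collector process per resource, one pidfile under the power output directory, TERM on shutdown. A minimal sketch of that pattern (directory and pidfile names copied from this log; the loop itself is an illustration, not pm/common verbatim):

    power_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
    for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile=$power_dir/$name.pid
        [[ -e $pidfile ]] || continue                     # collector never started
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true   # tolerate exited collectors
    done

Only the BMC collector is killed via sudo -E, matching how it is started (sudo -E .../collect-bmc-pm further down in this log).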
00:03:14.708   13:29:14  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:14.708   13:29:14  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:14.708    13:29:14  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:14.708     13:29:14  -- common/autotest_common.sh@1711 -- # lcov --version
00:03:14.708     13:29:14  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:14.708    13:29:14  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:14.708    13:29:14  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:14.708    13:29:14  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:14.708    13:29:14  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:14.708    13:29:14  -- scripts/common.sh@336 -- # IFS=.-:
00:03:14.708    13:29:14  -- scripts/common.sh@336 -- # read -ra ver1
00:03:14.708    13:29:14  -- scripts/common.sh@337 -- # IFS=.-:
00:03:14.708    13:29:14  -- scripts/common.sh@337 -- # read -ra ver2
00:03:14.708    13:29:14  -- scripts/common.sh@338 -- # local 'op=<'
00:03:14.708    13:29:14  -- scripts/common.sh@340 -- # ver1_l=2
00:03:14.708    13:29:14  -- scripts/common.sh@341 -- # ver2_l=1
00:03:14.708    13:29:14  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:14.708    13:29:14  -- scripts/common.sh@344 -- # case "$op" in
00:03:14.708    13:29:14  -- scripts/common.sh@345 -- # : 1
00:03:14.708    13:29:14  -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:14.708    13:29:14  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:14.708     13:29:14  -- scripts/common.sh@365 -- # decimal 1
00:03:14.708     13:29:14  -- scripts/common.sh@353 -- # local d=1
00:03:14.708     13:29:14  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:14.708     13:29:14  -- scripts/common.sh@355 -- # echo 1
00:03:14.708    13:29:14  -- scripts/common.sh@365 -- # ver1[v]=1
00:03:14.708     13:29:14  -- scripts/common.sh@366 -- # decimal 2
00:03:14.708     13:29:14  -- scripts/common.sh@353 -- # local d=2
00:03:14.708     13:29:14  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:14.708     13:29:14  -- scripts/common.sh@355 -- # echo 2
00:03:14.708    13:29:14  -- scripts/common.sh@366 -- # ver2[v]=2
00:03:14.708    13:29:14  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:14.708    13:29:14  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:14.708    13:29:14  -- scripts/common.sh@368 -- # return 0
00:03:14.708    13:29:14  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:14.708    13:29:14  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:14.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.708  		--rc genhtml_branch_coverage=1
00:03:14.708  		--rc genhtml_function_coverage=1
00:03:14.708  		--rc genhtml_legend=1
00:03:14.708  		--rc geninfo_all_blocks=1
00:03:14.708  		--rc geninfo_unexecuted_blocks=1
00:03:14.708  		
00:03:14.708  		'
00:03:14.708    13:29:14  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:14.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.708  		--rc genhtml_branch_coverage=1
00:03:14.708  		--rc genhtml_function_coverage=1
00:03:14.708  		--rc genhtml_legend=1
00:03:14.708  		--rc geninfo_all_blocks=1
00:03:14.708  		--rc geninfo_unexecuted_blocks=1
00:03:14.708  		
00:03:14.708  		'
00:03:14.708    13:29:14  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:14.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.708  		--rc genhtml_branch_coverage=1
00:03:14.708  		--rc genhtml_function_coverage=1
00:03:14.708  		--rc genhtml_legend=1
00:03:14.708  		--rc geninfo_all_blocks=1
00:03:14.708  		--rc geninfo_unexecuted_blocks=1
00:03:14.708  		
00:03:14.708  		'
00:03:14.708    13:29:14  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:14.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.708  		--rc genhtml_branch_coverage=1
00:03:14.708  		--rc genhtml_function_coverage=1
00:03:14.708  		--rc genhtml_legend=1
00:03:14.708  		--rc geninfo_all_blocks=1
00:03:14.708  		--rc geninfo_unexecuted_blocks=1
00:03:14.708  		
00:03:14.708  		'
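The xtrace run above (lt 1.15 2 ... return 0) is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' and compared component-wise as integers. A self-contained sketch of the same idea (a simplified reimplementation for illustration, not scripts/common.sh verbatim):

    # returns 0 (true) when version $1 is strictly older than version $2
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing parts count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # all components equal, so not strictly less-than
    }
    lt 1.15 2 && echo older    # prints "older": 1 < 2 decides it, as in the trace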
00:03:14.708   13:29:14  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:03:14.708     13:29:14  -- nvmf/common.sh@7 -- # uname -s
00:03:14.708    13:29:14  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:14.708    13:29:14  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:14.708    13:29:14  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:14.708    13:29:14  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:14.708    13:29:14  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:14.708    13:29:14  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:14.708    13:29:14  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:14.708    13:29:14  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:14.708    13:29:14  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:14.708     13:29:14  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:14.968    13:29:14  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:03:14.968    13:29:14  -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:03:14.968    13:29:14  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:14.968    13:29:14  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:14.968    13:29:14  -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:14.968    13:29:14  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:14.968    13:29:14  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:03:14.968     13:29:14  -- scripts/common.sh@15 -- # shopt -s extglob
00:03:14.968     13:29:14  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:14.968     13:29:14  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:14.968     13:29:14  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:14.968      13:29:14  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.968      13:29:14  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.968      13:29:14  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.968      13:29:14  -- paths/export.sh@5 -- # export PATH
00:03:14.968      13:29:14  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.968    13:29:14  -- nvmf/common.sh@51 -- # : 0
00:03:14.968    13:29:14  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:14.968    13:29:14  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:14.968    13:29:14  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:14.968    13:29:14  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:14.968    13:29:14  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:14.968    13:29:14  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:14.968  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:14.968    13:29:14  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:14.968    13:29:14  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:14.968    13:29:14  -- nvmf/common.sh@55 -- # have_pci_nics=0
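The "integer expression expected" complaint above is test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset flag expands to the empty string, which test cannot treat as an integer. A hedged sketch of the failure mode and the usual defensive spelling (SOME_TEST_FLAG is a placeholder name; this log does not show which variable is empty here):

    SOME_TEST_FLAG=""
    [ "$SOME_TEST_FLAG" -eq 1 ]                  # -> "[: : integer expression expected"
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then    # default empty/unset to 0 first
        echo "flag enabled"
    fi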
00:03:14.968   13:29:14  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:14.968    13:29:14  -- spdk/autotest.sh@32 -- # uname -s
00:03:14.968   13:29:14  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:14.968   13:29:14  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:14.968   13:29:14  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:03:14.968   13:29:14  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:14.968   13:29:14  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:03:14.968   13:29:14  -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:14.968    13:29:14  -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:14.968   13:29:14  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:14.968   13:29:14  -- spdk/autotest.sh@48 -- # udevadm_pid=3091123
00:03:14.968   13:29:14  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:14.968   13:29:14  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:14.968   13:29:14  -- pm/common@17 -- # local monitor
00:03:14.968   13:29:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.968   13:29:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.968   13:29:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.968    13:29:14  -- pm/common@21 -- # date +%s
00:03:14.968   13:29:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.968    13:29:14  -- pm/common@21 -- # date +%s
00:03:14.968   13:29:14  -- pm/common@25 -- # sleep 1
00:03:14.968    13:29:14  -- pm/common@21 -- # date +%s
00:03:14.968    13:29:14  -- pm/common@21 -- # date +%s
00:03:14.968   13:29:14  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734179354
00:03:14.968   13:29:14  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734179354
00:03:14.968   13:29:14  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734179354
00:03:14.968   13:29:14  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734179354
00:03:14.968  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734179354_collect-vmstat.pm.log
00:03:14.968  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734179354_collect-cpu-load.pm.log
00:03:14.968  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734179354_collect-cpu-temp.pm.log
00:03:14.968  Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734179354_collect-bmc-pm.bmc.pm.log
00:03:15.906   13:29:15  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:15.906   13:29:15  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:15.906   13:29:15  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:15.906   13:29:15  -- common/autotest_common.sh@10 -- # set +x
00:03:15.906   13:29:15  -- spdk/autotest.sh@59 -- # create_test_list
00:03:15.906   13:29:15  -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:15.906   13:29:15  -- common/autotest_common.sh@10 -- # set +x
00:03:15.906     13:29:15  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:03:15.906    13:29:15  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:15.906   13:29:15  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:15.906   13:29:15  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:03:15.906   13:29:15  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:15.906   13:29:15  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:15.906    13:29:15  -- common/autotest_common.sh@1457 -- # uname
00:03:15.906   13:29:15  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:03:15.906   13:29:15  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:15.906    13:29:15  -- common/autotest_common.sh@1477 -- # uname
00:03:15.906   13:29:15  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:15.906   13:29:15  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:15.906   13:29:15  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:15.906  lcov: LCOV version 1.15
00:03:15.906   13:29:15  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:03:37.846  /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:37.846  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
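The -i run above only records a zero-coverage baseline (the nvme_stubs.gcno warning is harmless: that object simply contains no instrumented functions). The post-test capture and merge happen after this section; a sketch of their conventional lcov shape, with $LCOV standing for the long option string exported earlier and $src for the spdk checkout:

    $LCOV -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info   # before tests
    # ... run the test suites ...
    $LCOV -q -c --no-external -t Tests -d "$src" -o cov_test.info         # after tests
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info             # merge both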
00:03:41.206   13:29:40  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:41.206   13:29:40  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:41.206   13:29:40  -- common/autotest_common.sh@10 -- # set +x
00:03:41.206   13:29:40  -- spdk/autotest.sh@78 -- # rm -f
00:03:41.206   13:29:40  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:44.500  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:44.500  0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:44.500   13:29:44  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:44.500   13:29:44  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:44.500   13:29:44  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:44.500   13:29:44  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:44.500   13:29:44  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:44.500   13:29:44  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:44.500   13:29:44  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:44.500   13:29:44  -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0
00:03:44.500   13:29:44  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:44.500   13:29:44  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:44.500   13:29:44  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:44.500   13:29:44  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:44.500   13:29:44  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:44.500   13:29:44  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:44.500   13:29:44  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:44.500   13:29:44  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:44.500   13:29:44  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:44.500   13:29:44  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:44.500   13:29:44  -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:44.759  No valid GPT data, bailing
00:03:44.759    13:29:44  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:44.759   13:29:44  -- scripts/common.sh@394 -- # pt=
00:03:44.759   13:29:44  -- scripts/common.sh@395 -- # return 1
00:03:44.759   13:29:44  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:44.759  1+0 records in
00:03:44.759  1+0 records out
00:03:44.759  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475459 s, 221 MB/s
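The sequence above is the pre-test wipe guard: spdk-gpt.py and blkid both look for a partition table, and only when neither finds one ("No valid GPT data, bailing", empty PTTYPE) does autotest zero the first MiB of the namespace. A condensed sketch of the same guard (device path from this run; the real check lives in scripts/common.sh):

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no partition table
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1     # stamp the first MiB clean
    fi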
00:03:44.759   13:29:44  -- spdk/autotest.sh@105 -- # sync
00:03:44.759   13:29:44  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:44.759   13:29:44  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:44.759    13:29:44  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:51.331    13:29:50  -- spdk/autotest.sh@111 -- # uname -s
00:03:51.331   13:29:50  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:51.331   13:29:50  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:51.331   13:29:50  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:54.623  Hugepages
00:03:54.623  node     hugesize     free /  total
00:03:54.623  node0   1048576kB        0 /      0
00:03:54.623  node0      2048kB        0 /      0
00:03:54.623  node1   1048576kB        0 /      0
00:03:54.623  node1      2048kB        0 /      0
00:03:54.623  
00:03:54.623  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:54.623  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:03:54.623  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:03:54.623  NVMe                      0000:d8:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
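The status table above is assembled from sysfs: per-NUMA-node hugepage counters plus the PCI device/driver listing. A sketch of where the Hugepages columns come from (standard kernel paths; not setup.sh verbatim):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            printf '%s %s: %s / %s\n' "${node##*/}" "${hp##*/}" \
                "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
        done
    done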
00:03:54.623    13:29:54  -- spdk/autotest.sh@117 -- # uname -s
00:03:54.624   13:29:54  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:54.624   13:29:54  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:54.624   13:29:54  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:57.914  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:57.914  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:59.821  0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
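Each "ioatdma -> vfio-pci" line above is setup.sh rebinding one BDF so userspace (SPDK/DPDK) can own the device. A minimal sketch of the sysfs mechanism such rebinds typically use (driver_override plus drivers_probe; BDF copied from this log, and setup.sh's actual implementation may differ):

    bdf=0000:00:04.7                                              # run as root,
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"     # vfio-pci module loaded
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the new driver
    echo "$bdf"   > /sys/bus/pci/drivers_probe                    # rebind the device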
00:03:59.821   13:29:59  -- common/autotest_common.sh@1517 -- # sleep 1
00:04:00.758   13:30:00  -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:00.758   13:30:00  -- common/autotest_common.sh@1518 -- # local bdfs
00:04:00.758   13:30:00  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:00.758    13:30:00  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:00.758    13:30:00  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:00.758    13:30:00  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:00.758    13:30:00  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:00.758     13:30:00  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:00.758     13:30:00  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:01.017    13:30:00  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:01.017    13:30:00  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:04:01.017   13:30:00  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:04:04.307  Waiting for block devices as requested
00:04:04.307  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:04.307  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:04.307  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:04.307  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:04.307  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:04.307  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:04.566  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:04.566  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:04.566  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:04.825  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:04.825  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:04.825  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:05.084  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:05.084  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:05.084  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:05.343  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:05.343  0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:04:05.602   13:30:05  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:05.602    13:30:05  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0
00:04:05.602     13:30:05  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:05.602     13:30:05  -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme
00:04:05.602    13:30:05  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:04:05.602    13:30:05  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]]
00:04:05.602     13:30:05  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:04:05.602    13:30:05  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:05.602   13:30:05  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:05.602   13:30:05  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:05.602    13:30:05  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:05.602    13:30:05  -- common/autotest_common.sh@1531 -- # grep oacs
00:04:05.602    13:30:05  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:05.602   13:30:05  -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:04:05.602   13:30:05  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:05.602   13:30:05  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:05.602    13:30:05  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:05.602    13:30:05  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:05.602    13:30:05  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:05.602   13:30:05  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:05.602   13:30:05  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:04:05.602   13:30:05  -- common/autotest_common.sh@1543 -- # continue
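The probe above reads the controller's OACS word and its unallocated capacity before deciding whether a namespace revert is needed: oacs=0xe has bit 3 set (0xe & 0x8 = 8, namespace management supported), and unvmcap=0 means there is nothing to reclaim, hence the continue. A sketch of the same probe (controller path and pipelines as shown in this log):

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then                   # bit 3: namespace management
        unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "no unallocated capacity; skip revert"
    fi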
00:04:05.602   13:30:05  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:05.602   13:30:05  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:05.602   13:30:05  -- common/autotest_common.sh@10 -- # set +x
00:04:05.602   13:30:05  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:05.602   13:30:05  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:05.602   13:30:05  -- common/autotest_common.sh@10 -- # set +x
00:04:05.602   13:30:05  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:08.138  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:08.138  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:08.138  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:08.398  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:10.306  0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:04:10.566   13:30:10  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:10.566   13:30:10  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:10.566   13:30:10  -- common/autotest_common.sh@10 -- # set +x
00:04:10.566   13:30:10  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:10.566   13:30:10  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:10.566    13:30:10  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:10.566    13:30:10  -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:10.566    13:30:10  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:10.566    13:30:10  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:10.566    13:30:10  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:10.566     13:30:10  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:10.566     13:30:10  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:10.566     13:30:10  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:10.566     13:30:10  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:10.566      13:30:10  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:10.566      13:30:10  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:10.566     13:30:10  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:10.566     13:30:10  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:04:10.566    13:30:10  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:10.566     13:30:10  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device
00:04:10.566    13:30:10  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:10.566    13:30:10  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:10.566    13:30:10  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:10.566    13:30:10  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:10.566    13:30:10  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0
00:04:10.566   13:30:10  -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]]
00:04:10.566   13:30:10  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3107135
00:04:10.566   13:30:10  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:10.566   13:30:10  -- common/autotest_common.sh@1585 -- # waitforlisten 3107135
00:04:10.566   13:30:10  -- common/autotest_common.sh@835 -- # '[' -z 3107135 ']'
00:04:10.566   13:30:10  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:10.566   13:30:10  -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:10.566   13:30:10  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:10.566  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:10.566   13:30:10  -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:10.566   13:30:10  -- common/autotest_common.sh@10 -- # set +x
00:04:10.825  [2024-12-14 13:30:10.392242] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:10.825  [2024-12-14 13:30:10.392330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107135 ]
00:04:10.825  [2024-12-14 13:30:10.524951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:11.085  [2024-12-14 13:30:10.624494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:11.654   13:30:11  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:11.654   13:30:11  -- common/autotest_common.sh@868 -- # return 0
00:04:11.654   13:30:11  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:04:11.654   13:30:11  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:04:11.654   13:30:11  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:04:14.945  nvme0n1
00:04:14.945   13:30:14  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:14.945  [2024-12-14 13:30:14.595905] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:14.945  request:
00:04:14.945  {
00:04:14.945    "nvme_ctrlr_name": "nvme0",
00:04:14.945    "password": "test",
00:04:14.945    "method": "bdev_nvme_opal_revert",
00:04:14.945    "req_id": 1
00:04:14.945  }
00:04:14.945  Got JSON-RPC error response
00:04:14.945  response:
00:04:14.945  {
00:04:14.945    "code": -32602,
00:04:14.945    "message": "Invalid parameters"
00:04:14.945  }
00:04:14.945   13:30:14  -- common/autotest_common.sh@1591 -- # true
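The -32602 response above is expected on this drive: the controller does not support Opal ("nvme0 not support opal"), vbdev_opal_rpc rejects the revert, and autotest deliberately swallows the failure (the `true` directly above) before moving on. A sketch of that tolerate-and-continue call (paths as shown in this log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || true   # non-Opal drive: ignore error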
00:04:14.945   13:30:14  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:04:14.945   13:30:14  -- common/autotest_common.sh@1595 -- # killprocess 3107135
00:04:14.945   13:30:14  -- common/autotest_common.sh@954 -- # '[' -z 3107135 ']'
00:04:14.945   13:30:14  -- common/autotest_common.sh@958 -- # kill -0 3107135
00:04:14.945    13:30:14  -- common/autotest_common.sh@959 -- # uname
00:04:14.945   13:30:14  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:14.945    13:30:14  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107135
00:04:15.205   13:30:14  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:15.205   13:30:14  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:15.205   13:30:14  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107135'
00:04:15.205  killing process with pid 3107135
00:04:15.205   13:30:14  -- common/autotest_common.sh@973 -- # kill 3107135
00:04:15.205   13:30:14  -- common/autotest_common.sh@978 -- # wait 3107135
00:04:19.404   13:30:19  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:19.404   13:30:19  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:19.404   13:30:19  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:19.404   13:30:19  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:19.404   13:30:19  -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:19.404   13:30:19  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:19.404   13:30:19  -- common/autotest_common.sh@10 -- # set +x
00:04:19.404   13:30:19  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:19.404   13:30:19  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:04:19.404   13:30:19  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:19.404   13:30:19  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.404   13:30:19  -- common/autotest_common.sh@10 -- # set +x
00:04:19.404  ************************************
00:04:19.404  START TEST env
00:04:19.404  ************************************
00:04:19.404   13:30:19 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:04:19.741  * Looking for test storage...
00:04:19.741  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:19.741     13:30:19 env -- common/autotest_common.sh@1711 -- # lcov --version
00:04:19.741     13:30:19 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:19.741    13:30:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:19.741    13:30:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:19.741    13:30:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:19.741    13:30:19 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:19.741    13:30:19 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:19.741    13:30:19 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:19.741    13:30:19 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:19.741    13:30:19 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:19.741    13:30:19 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:19.741    13:30:19 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:19.741    13:30:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:19.741    13:30:19 env -- scripts/common.sh@344 -- # case "$op" in
00:04:19.741    13:30:19 env -- scripts/common.sh@345 -- # : 1
00:04:19.741    13:30:19 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:19.741    13:30:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:19.741     13:30:19 env -- scripts/common.sh@365 -- # decimal 1
00:04:19.741     13:30:19 env -- scripts/common.sh@353 -- # local d=1
00:04:19.741     13:30:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:19.741     13:30:19 env -- scripts/common.sh@355 -- # echo 1
00:04:19.741    13:30:19 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:19.741     13:30:19 env -- scripts/common.sh@366 -- # decimal 2
00:04:19.741     13:30:19 env -- scripts/common.sh@353 -- # local d=2
00:04:19.741     13:30:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:19.741     13:30:19 env -- scripts/common.sh@355 -- # echo 2
00:04:19.741    13:30:19 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:19.741    13:30:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:19.741    13:30:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:19.741    13:30:19 env -- scripts/common.sh@368 -- # return 0
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:19.741  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.741  		--rc genhtml_branch_coverage=1
00:04:19.741  		--rc genhtml_function_coverage=1
00:04:19.741  		--rc genhtml_legend=1
00:04:19.741  		--rc geninfo_all_blocks=1
00:04:19.741  		--rc geninfo_unexecuted_blocks=1
00:04:19.741  		
00:04:19.741  		'
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:19.741  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.741  		--rc genhtml_branch_coverage=1
00:04:19.741  		--rc genhtml_function_coverage=1
00:04:19.741  		--rc genhtml_legend=1
00:04:19.741  		--rc geninfo_all_blocks=1
00:04:19.741  		--rc geninfo_unexecuted_blocks=1
00:04:19.741  		
00:04:19.741  		'
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:19.741  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.741  		--rc genhtml_branch_coverage=1
00:04:19.741  		--rc genhtml_function_coverage=1
00:04:19.741  		--rc genhtml_legend=1
00:04:19.741  		--rc geninfo_all_blocks=1
00:04:19.741  		--rc geninfo_unexecuted_blocks=1
00:04:19.741  		
00:04:19.741  		'
00:04:19.741    13:30:19 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:19.741  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.741  		--rc genhtml_branch_coverage=1
00:04:19.741  		--rc genhtml_function_coverage=1
00:04:19.741  		--rc genhtml_legend=1
00:04:19.741  		--rc geninfo_all_blocks=1
00:04:19.741  		--rc geninfo_unexecuted_blocks=1
00:04:19.741  		
00:04:19.741  		'
00:04:19.741   13:30:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:04:19.741   13:30:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:19.741   13:30:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.741   13:30:19 env -- common/autotest_common.sh@10 -- # set +x
00:04:19.741  ************************************
00:04:19.741  START TEST env_memory
00:04:19.741  ************************************
00:04:19.741   13:30:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:04:19.741  
00:04:19.741  
00:04:19.741       CUnit - A unit testing framework for C - Version 2.1-3
00:04:19.741       http://cunit.sourceforge.net/
00:04:19.741  
00:04:19.741  
00:04:19.741  Suite: memory
00:04:19.741    Test: alloc and free memory map ...[2024-12-14 13:30:19.368512] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:19.741  passed
00:04:19.741    Test: mem map translation ...[2024-12-14 13:30:19.406432] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:19.741  [2024-12-14 13:30:19.406459] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:19.741  [2024-12-14 13:30:19.406517] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:19.742  [2024-12-14 13:30:19.406536] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:19.742  passed
00:04:19.742    Test: mem map registration ...[2024-12-14 13:30:19.466533] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:19.742  [2024-12-14 13:30:19.466562] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:20.020  passed
00:04:20.020    Test: mem map adjacent registrations ...passed
00:04:20.020  
00:04:20.020  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:20.020                suites      1      1    n/a      0        0
00:04:20.020                 tests      4      4      4      0        0
00:04:20.020               asserts    152    152    152      0      n/a
00:04:20.020  
00:04:20.020  Elapsed time =    0.213 seconds
00:04:20.020  
00:04:20.020  real	0m0.248s
00:04:20.020  user	0m0.224s
00:04:20.020  sys	0m0.023s
00:04:20.020   13:30:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:20.020   13:30:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:20.020  ************************************
00:04:20.020  END TEST env_memory
00:04:20.020  ************************************
00:04:20.020   13:30:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:20.020   13:30:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.020   13:30:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.020   13:30:19 env -- common/autotest_common.sh@10 -- # set +x
00:04:20.020  ************************************
00:04:20.020  START TEST env_vtophys
00:04:20.020  ************************************
00:04:20.020   13:30:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:20.020  EAL: lib.eal log level changed from notice to debug
00:04:20.020  EAL: Detected lcore 0 as core 0 on socket 0
00:04:20.020  EAL: Detected lcore 1 as core 1 on socket 0
00:04:20.020  EAL: Detected lcore 2 as core 2 on socket 0
00:04:20.020  EAL: Detected lcore 3 as core 3 on socket 0
00:04:20.020  EAL: Detected lcore 4 as core 4 on socket 0
00:04:20.020  EAL: Detected lcore 5 as core 5 on socket 0
00:04:20.020  EAL: Detected lcore 6 as core 6 on socket 0
00:04:20.020  EAL: Detected lcore 7 as core 8 on socket 0
00:04:20.020  EAL: Detected lcore 8 as core 9 on socket 0
00:04:20.020  EAL: Detected lcore 9 as core 10 on socket 0
00:04:20.020  EAL: Detected lcore 10 as core 11 on socket 0
00:04:20.020  EAL: Detected lcore 11 as core 12 on socket 0
00:04:20.020  EAL: Detected lcore 12 as core 13 on socket 0
00:04:20.020  EAL: Detected lcore 13 as core 14 on socket 0
00:04:20.020  EAL: Detected lcore 14 as core 16 on socket 0
00:04:20.020  EAL: Detected lcore 15 as core 17 on socket 0
00:04:20.020  EAL: Detected lcore 16 as core 18 on socket 0
00:04:20.020  EAL: Detected lcore 17 as core 19 on socket 0
00:04:20.020  EAL: Detected lcore 18 as core 20 on socket 0
00:04:20.020  EAL: Detected lcore 19 as core 21 on socket 0
00:04:20.020  EAL: Detected lcore 20 as core 22 on socket 0
00:04:20.020  EAL: Detected lcore 21 as core 24 on socket 0
00:04:20.020  EAL: Detected lcore 22 as core 25 on socket 0
00:04:20.020  EAL: Detected lcore 23 as core 26 on socket 0
00:04:20.020  EAL: Detected lcore 24 as core 27 on socket 0
00:04:20.020  EAL: Detected lcore 25 as core 28 on socket 0
00:04:20.021  EAL: Detected lcore 26 as core 29 on socket 0
00:04:20.021  EAL: Detected lcore 27 as core 30 on socket 0
00:04:20.021  EAL: Detected lcore 28 as core 0 on socket 1
00:04:20.021  EAL: Detected lcore 29 as core 1 on socket 1
00:04:20.021  EAL: Detected lcore 30 as core 2 on socket 1
00:04:20.021  EAL: Detected lcore 31 as core 3 on socket 1
00:04:20.021  EAL: Detected lcore 32 as core 4 on socket 1
00:04:20.021  EAL: Detected lcore 33 as core 5 on socket 1
00:04:20.021  EAL: Detected lcore 34 as core 6 on socket 1
00:04:20.021  EAL: Detected lcore 35 as core 8 on socket 1
00:04:20.021  EAL: Detected lcore 36 as core 9 on socket 1
00:04:20.021  EAL: Detected lcore 37 as core 10 on socket 1
00:04:20.021  EAL: Detected lcore 38 as core 11 on socket 1
00:04:20.021  EAL: Detected lcore 39 as core 12 on socket 1
00:04:20.021  EAL: Detected lcore 40 as core 13 on socket 1
00:04:20.021  EAL: Detected lcore 41 as core 14 on socket 1
00:04:20.021  EAL: Detected lcore 42 as core 16 on socket 1
00:04:20.021  EAL: Detected lcore 43 as core 17 on socket 1
00:04:20.021  EAL: Detected lcore 44 as core 18 on socket 1
00:04:20.021  EAL: Detected lcore 45 as core 19 on socket 1
00:04:20.021  EAL: Detected lcore 46 as core 20 on socket 1
00:04:20.021  EAL: Detected lcore 47 as core 21 on socket 1
00:04:20.021  EAL: Detected lcore 48 as core 22 on socket 1
00:04:20.021  EAL: Detected lcore 49 as core 24 on socket 1
00:04:20.021  EAL: Detected lcore 50 as core 25 on socket 1
00:04:20.021  EAL: Detected lcore 51 as core 26 on socket 1
00:04:20.021  EAL: Detected lcore 52 as core 27 on socket 1
00:04:20.021  EAL: Detected lcore 53 as core 28 on socket 1
00:04:20.021  EAL: Detected lcore 54 as core 29 on socket 1
00:04:20.021  EAL: Detected lcore 55 as core 30 on socket 1
00:04:20.021  EAL: Detected lcore 56 as core 0 on socket 0
00:04:20.021  EAL: Detected lcore 57 as core 1 on socket 0
00:04:20.021  EAL: Detected lcore 58 as core 2 on socket 0
00:04:20.021  EAL: Detected lcore 59 as core 3 on socket 0
00:04:20.021  EAL: Detected lcore 60 as core 4 on socket 0
00:04:20.021  EAL: Detected lcore 61 as core 5 on socket 0
00:04:20.021  EAL: Detected lcore 62 as core 6 on socket 0
00:04:20.021  EAL: Detected lcore 63 as core 8 on socket 0
00:04:20.021  EAL: Detected lcore 64 as core 9 on socket 0
00:04:20.021  EAL: Detected lcore 65 as core 10 on socket 0
00:04:20.021  EAL: Detected lcore 66 as core 11 on socket 0
00:04:20.021  EAL: Detected lcore 67 as core 12 on socket 0
00:04:20.021  EAL: Detected lcore 68 as core 13 on socket 0
00:04:20.021  EAL: Detected lcore 69 as core 14 on socket 0
00:04:20.021  EAL: Detected lcore 70 as core 16 on socket 0
00:04:20.021  EAL: Detected lcore 71 as core 17 on socket 0
00:04:20.021  EAL: Detected lcore 72 as core 18 on socket 0
00:04:20.021  EAL: Detected lcore 73 as core 19 on socket 0
00:04:20.021  EAL: Detected lcore 74 as core 20 on socket 0
00:04:20.021  EAL: Detected lcore 75 as core 21 on socket 0
00:04:20.021  EAL: Detected lcore 76 as core 22 on socket 0
00:04:20.021  EAL: Detected lcore 77 as core 24 on socket 0
00:04:20.021  EAL: Detected lcore 78 as core 25 on socket 0
00:04:20.021  EAL: Detected lcore 79 as core 26 on socket 0
00:04:20.021  EAL: Detected lcore 80 as core 27 on socket 0
00:04:20.021  EAL: Detected lcore 81 as core 28 on socket 0
00:04:20.021  EAL: Detected lcore 82 as core 29 on socket 0
00:04:20.021  EAL: Detected lcore 83 as core 30 on socket 0
00:04:20.021  EAL: Detected lcore 84 as core 0 on socket 1
00:04:20.021  EAL: Detected lcore 85 as core 1 on socket 1
00:04:20.021  EAL: Detected lcore 86 as core 2 on socket 1
00:04:20.021  EAL: Detected lcore 87 as core 3 on socket 1
00:04:20.021  EAL: Detected lcore 88 as core 4 on socket 1
00:04:20.021  EAL: Detected lcore 89 as core 5 on socket 1
00:04:20.021  EAL: Detected lcore 90 as core 6 on socket 1
00:04:20.021  EAL: Detected lcore 91 as core 8 on socket 1
00:04:20.021  EAL: Detected lcore 92 as core 9 on socket 1
00:04:20.021  EAL: Detected lcore 93 as core 10 on socket 1
00:04:20.021  EAL: Detected lcore 94 as core 11 on socket 1
00:04:20.021  EAL: Detected lcore 95 as core 12 on socket 1
00:04:20.021  EAL: Detected lcore 96 as core 13 on socket 1
00:04:20.021  EAL: Detected lcore 97 as core 14 on socket 1
00:04:20.021  EAL: Detected lcore 98 as core 16 on socket 1
00:04:20.021  EAL: Detected lcore 99 as core 17 on socket 1
00:04:20.021  EAL: Detected lcore 100 as core 18 on socket 1
00:04:20.021  EAL: Detected lcore 101 as core 19 on socket 1
00:04:20.021  EAL: Detected lcore 102 as core 20 on socket 1
00:04:20.021  EAL: Detected lcore 103 as core 21 on socket 1
00:04:20.021  EAL: Detected lcore 104 as core 22 on socket 1
00:04:20.021  EAL: Detected lcore 105 as core 24 on socket 1
00:04:20.021  EAL: Detected lcore 106 as core 25 on socket 1
00:04:20.021  EAL: Detected lcore 107 as core 26 on socket 1
00:04:20.021  EAL: Detected lcore 108 as core 27 on socket 1
00:04:20.021  EAL: Detected lcore 109 as core 28 on socket 1
00:04:20.021  EAL: Detected lcore 110 as core 29 on socket 1
00:04:20.021  EAL: Detected lcore 111 as core 30 on socket 1
00:04:20.021  EAL: Maximum logical cores by configuration: 128
00:04:20.021  EAL: Detected CPU lcores: 112
00:04:20.021  EAL: Detected NUMA nodes: 2
00:04:20.021  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:20.021  EAL: Detected shared linkage of DPDK
00:04:20.021  EAL: No shared files mode enabled, IPC will be disabled
00:04:20.021  EAL: Bus pci wants IOVA as 'DC'
00:04:20.021  EAL: Buses did not request a specific IOVA mode.
00:04:20.021  EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:20.021  EAL: Selected IOVA mode 'VA'
00:04:20.021  EAL: Probing VFIO support...
00:04:20.021  EAL: IOMMU type 1 (Type 1) is supported
00:04:20.021  EAL: IOMMU type 7 (sPAPR) is not supported
00:04:20.021  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:20.021  EAL: VFIO support initialized
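"VFIO support initialized" above rests on the IOMMU being enabled (type 1 here). A quick hedged check for that precondition on any Linux box:

    ls /sys/kernel/iommu_groups | wc -l   # non-zero => IOMMU groups exist, vfio-pci can bind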
00:04:20.021  EAL: Ask a virtual area of 0x2e000 bytes
00:04:20.021  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:20.021  EAL: Setting up physically contiguous memory...
00:04:20.021  EAL: Setting maximum number of open files to 524288
00:04:20.021  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:20.021  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:20.021  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:20.021  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:20.021  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.021  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.021  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:20.021  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:20.021  EAL: Ask a virtual area of 0x61000 bytes
00:04:20.021  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:20.281  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.281  EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.281  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:20.281  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:20.281  EAL: Hugepages will be freed exactly as allocated.
00:04:20.281  EAL: No shared files mode enabled, IPC is disabled
00:04:20.281  EAL: No shared files mode enabled, IPC is disabled
00:04:20.281  EAL: TSC frequency is ~2500000 KHz
00:04:20.281  EAL: Main lcore 0 is ready (tid=7eff497dca40;cpuset=[0])
00:04:20.281  EAL: Trying to obtain current memory policy.
00:04:20.281  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.281  EAL: Restoring previous memory policy: 0
00:04:20.281  EAL: request: mp_malloc_sync
00:04:20.281  EAL: No shared files mode enabled, IPC is disabled
00:04:20.281  EAL: Heap on socket 0 was expanded by 2MB
00:04:20.281  EAL: No shared files mode enabled, IPC is disabled
00:04:20.281  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:20.281  EAL: Mem event callback 'spdk:(nil)' registered
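The EAL lines above show why IOVA mode 'VA' was chosen: a Type 1 IOMMU is present, VFIO initialized, and 2MB hugepages (hugepage_sz:2097152) back the memseg lists. A minimal bash sketch for checking the same preconditions on a host before a run; the sysfs/procfs paths are standard Linux and an assumption here, not something this job executed.

# Sketch: preconditions behind "Selected IOVA mode 'VA'" above.
if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
    echo "IOMMU groups present -> VFIO/Type 1 usable, IOVA-as-VA is possible"
else
    echo "no IOMMU groups -> EAL would fall back to IOVA-as-PA or No-IOMMU"
fi
grep Hugepagesize /proc/meminfo    # 2048 kB matches hugepage_sz:2097152 above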
00:04:20.281  
00:04:20.281  
00:04:20.281       CUnit - A unit testing framework for C - Version 2.1-3
00:04:20.281       http://cunit.sourceforge.net/
00:04:20.281  
00:04:20.281  
00:04:20.281  Suite: components_suite
00:04:20.540    Test: vtophys_malloc_test ...passed
00:04:20.540    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:20.540  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.540  EAL: Restoring previous memory policy: 4
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was expanded by 4MB
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was shrunk by 4MB
00:04:20.540  EAL: Trying to obtain current memory policy.
00:04:20.540  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.540  EAL: Restoring previous memory policy: 4
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was expanded by 6MB
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was shrunk by 6MB
00:04:20.540  EAL: Trying to obtain current memory policy.
00:04:20.540  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.540  EAL: Restoring previous memory policy: 4
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was expanded by 10MB
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was shrunk by 10MB
00:04:20.540  EAL: Trying to obtain current memory policy.
00:04:20.540  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.540  EAL: Restoring previous memory policy: 4
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was expanded by 18MB
00:04:20.540  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.540  EAL: request: mp_malloc_sync
00:04:20.540  EAL: No shared files mode enabled, IPC is disabled
00:04:20.540  EAL: Heap on socket 0 was shrunk by 18MB
00:04:20.799  EAL: Trying to obtain current memory policy.
00:04:20.799  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.799  EAL: Restoring previous memory policy: 4
00:04:20.799  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.799  EAL: request: mp_malloc_sync
00:04:20.799  EAL: No shared files mode enabled, IPC is disabled
00:04:20.799  EAL: Heap on socket 0 was expanded by 34MB
00:04:20.799  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.799  EAL: request: mp_malloc_sync
00:04:20.799  EAL: No shared files mode enabled, IPC is disabled
00:04:20.799  EAL: Heap on socket 0 was shrunk by 34MB
00:04:20.799  EAL: Trying to obtain current memory policy.
00:04:20.799  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.799  EAL: Restoring previous memory policy: 4
00:04:20.799  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.799  EAL: request: mp_malloc_sync
00:04:20.799  EAL: No shared files mode enabled, IPC is disabled
00:04:20.799  EAL: Heap on socket 0 was expanded by 66MB
00:04:20.799  EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.799  EAL: request: mp_malloc_sync
00:04:20.799  EAL: No shared files mode enabled, IPC is disabled
00:04:20.799  EAL: Heap on socket 0 was shrunk by 66MB
00:04:21.059  EAL: Trying to obtain current memory policy.
00:04:21.059  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.059  EAL: Restoring previous memory policy: 4
00:04:21.059  EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.059  EAL: request: mp_malloc_sync
00:04:21.059  EAL: No shared files mode enabled, IPC is disabled
00:04:21.059  EAL: Heap on socket 0 was expanded by 130MB
00:04:21.318  EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.318  EAL: request: mp_malloc_sync
00:04:21.318  EAL: No shared files mode enabled, IPC is disabled
00:04:21.318  EAL: Heap on socket 0 was shrunk by 130MB
00:04:21.578  EAL: Trying to obtain current memory policy.
00:04:21.578  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.578  EAL: Restoring previous memory policy: 4
00:04:21.578  EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.578  EAL: request: mp_malloc_sync
00:04:21.578  EAL: No shared files mode enabled, IPC is disabled
00:04:21.578  EAL: Heap on socket 0 was expanded by 258MB
00:04:21.837  EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.837  EAL: request: mp_malloc_sync
00:04:21.837  EAL: No shared files mode enabled, IPC is disabled
00:04:21.837  EAL: Heap on socket 0 was shrunk by 258MB
00:04:22.406  EAL: Trying to obtain current memory policy.
00:04:22.406  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:22.406  EAL: Restoring previous memory policy: 4
00:04:22.406  EAL: Calling mem event callback 'spdk:(nil)'
00:04:22.406  EAL: request: mp_malloc_sync
00:04:22.406  EAL: No shared files mode enabled, IPC is disabled
00:04:22.406  EAL: Heap on socket 0 was expanded by 514MB
00:04:23.344  EAL: Calling mem event callback 'spdk:(nil)'
00:04:23.344  EAL: request: mp_malloc_sync
00:04:23.344  EAL: No shared files mode enabled, IPC is disabled
00:04:23.344  EAL: Heap on socket 0 was shrunk by 514MB
00:04:24.281  EAL: Trying to obtain current memory policy.
00:04:24.281  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.281  EAL: Restoring previous memory policy: 4
00:04:24.281  EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.281  EAL: request: mp_malloc_sync
00:04:24.281  EAL: No shared files mode enabled, IPC is disabled
00:04:24.281  EAL: Heap on socket 0 was expanded by 1026MB
00:04:25.661  EAL: Calling mem event callback 'spdk:(nil)'
00:04:25.920  EAL: request: mp_malloc_sync
00:04:25.920  EAL: No shared files mode enabled, IPC is disabled
00:04:25.920  EAL: Heap on socket 0 was shrunk by 1026MB
00:04:27.826  passed
00:04:27.826  
00:04:27.826  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:27.826                suites      1      1    n/a      0        0
00:04:27.826                 tests      2      2      2      0        0
00:04:27.826               asserts    497    497    497      0      n/a
00:04:27.826  
00:04:27.826  Elapsed time =    7.246 seconds
00:04:27.826  EAL: Calling mem event callback 'spdk:(nil)'
00:04:27.826  EAL: request: mp_malloc_sync
00:04:27.826  EAL: No shared files mode enabled, IPC is disabled
00:04:27.826  EAL: Heap on socket 0 was shrunk by 2MB
00:04:27.826  EAL: No shared files mode enabled, IPC is disabled
00:04:27.826  EAL: No shared files mode enabled, IPC is disabled
00:04:27.826  EAL: No shared files mode enabled, IPC is disabled
00:04:27.826  
00:04:27.826  real	0m7.508s
00:04:27.826  user	0m6.623s
00:04:27.826  sys	0m0.827s
00:04:27.826   13:30:27 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.826   13:30:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:27.826  ************************************
00:04:27.826  END TEST env_vtophys
00:04:27.826  ************************************
00:04:27.826   13:30:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:27.826   13:30:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.826   13:30:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.826   13:30:27 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.826  ************************************
00:04:27.826  START TEST env_pci
00:04:27.826  ************************************
00:04:27.826   13:30:27 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:27.826  
00:04:27.826  
00:04:27.826       CUnit - A unit testing framework for C - Version 2.1-3
00:04:27.826       http://cunit.sourceforge.net/
00:04:27.826  
00:04:27.826  
00:04:27.826  Suite: pci
00:04:27.826    Test: pci_hook ...[2024-12-14 13:30:27.277444] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3110258 has claimed it
00:04:27.826  EAL: Cannot find device (10000:00:01.0)
00:04:27.826  EAL: Failed to attach device on primary process
00:04:27.826  passed
00:04:27.826  
00:04:27.826  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:27.826                suites      1      1    n/a      0        0
00:04:27.826                 tests      1      1      1      0        0
00:04:27.826               asserts     25     25     25      0      n/a
00:04:27.826  
00:04:27.826  Elapsed time =    0.054 seconds
00:04:27.826  
00:04:27.826  real	0m0.144s
00:04:27.826  user	0m0.043s
00:04:27.826  sys	0m0.101s
00:04:27.826   13:30:27 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.826   13:30:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:27.826  ************************************
00:04:27.826  END TEST env_pci
00:04:27.826  ************************************
00:04:27.826   13:30:27 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:27.826    13:30:27 env -- env/env.sh@15 -- # uname
00:04:27.826   13:30:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:27.826   13:30:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:27.826   13:30:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.826   13:30:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:27.826   13:30:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.826   13:30:27 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.826  ************************************
00:04:27.826  START TEST env_dpdk_post_init
00:04:27.826  ************************************
00:04:27.826   13:30:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.826  EAL: Detected CPU lcores: 112
00:04:27.826  EAL: Detected NUMA nodes: 2
00:04:27.826  EAL: Detected shared linkage of DPDK
00:04:27.826  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:28.086  EAL: Selected IOVA mode 'VA'
00:04:28.086  EAL: VFIO support initialized
00:04:28.086  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:28.086  EAL: Using IOMMU type 1 (Type 1)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:28.086  EAL: Ignore mapping IO port bar(1)
00:04:28.086  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:28.346  EAL: Ignore mapping IO port bar(1)
00:04:28.346  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:29.285  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:04:33.479  EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:04:33.479  EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000
00:04:33.479  Starting DPDK initialization...
00:04:33.479  Starting SPDK post initialization...
00:04:33.479  SPDK NVMe probe
00:04:33.479  Attaching to 0000:d8:00.0
00:04:33.479  Attached to 0000:d8:00.0
00:04:33.479  Cleaning up...
00:04:33.479  
00:04:33.479  real	0m5.506s
00:04:33.479  user	0m3.866s
00:04:33.479  sys	0m0.691s
00:04:33.479   13:30:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.479   13:30:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:33.479  ************************************
00:04:33.479  END TEST env_dpdk_post_init
00:04:33.479  ************************************
00:04:33.479    13:30:33 env -- env/env.sh@26 -- # uname
00:04:33.479   13:30:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:33.479   13:30:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:33.479   13:30:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.479   13:30:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.479   13:30:33 env -- common/autotest_common.sh@10 -- # set +x
00:04:33.479  ************************************
00:04:33.479  START TEST env_mem_callbacks
00:04:33.479  ************************************
00:04:33.479   13:30:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:33.479  EAL: Detected CPU lcores: 112
00:04:33.479  EAL: Detected NUMA nodes: 2
00:04:33.479  EAL: Detected shared linkage of DPDK
00:04:33.479  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:33.479  EAL: Selected IOVA mode 'VA'
00:04:33.479  EAL: VFIO support initialized
00:04:33.479  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:33.479  
00:04:33.479  
00:04:33.479       CUnit - A unit testing framework for C - Version 2.1-3
00:04:33.479       http://cunit.sourceforge.net/
00:04:33.479  
00:04:33.479  
00:04:33.479  Suite: memory
00:04:33.479    Test: test ...
00:04:33.479  register 0x200000200000 2097152
00:04:33.479  malloc 3145728
00:04:33.479  register 0x200000400000 4194304
00:04:33.479  buf 0x2000004fffc0 len 3145728 PASSED
00:04:33.479  malloc 64
00:04:33.479  buf 0x2000004ffec0 len 64 PASSED
00:04:33.479  malloc 4194304
00:04:33.479  register 0x200000800000 6291456
00:04:33.479  buf 0x2000009fffc0 len 4194304 PASSED
00:04:33.479  free 0x2000004fffc0 3145728
00:04:33.479  free 0x2000004ffec0 64
00:04:33.479  unregister 0x200000400000 4194304 PASSED
00:04:33.479  free 0x2000009fffc0 4194304
00:04:33.479  unregister 0x200000800000 6291456 PASSED
00:04:33.479  malloc 8388608
00:04:33.479  register 0x200000400000 10485760
00:04:33.479  buf 0x2000005fffc0 len 8388608 PASSED
00:04:33.479  free 0x2000005fffc0 8388608
00:04:33.479  unregister 0x200000400000 10485760 PASSED
00:04:33.739  passed
00:04:33.739  
00:04:33.739  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:33.739                suites      1      1    n/a      0        0
00:04:33.739                 tests      1      1      1      0        0
00:04:33.739               asserts     15     15     15      0      n/a
00:04:33.739  
00:04:33.739  Elapsed time =    0.060 seconds
00:04:33.739  
00:04:33.739  real	0m0.180s
00:04:33.739  user	0m0.096s
00:04:33.739  sys	0m0.083s
00:04:33.739   13:30:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.739   13:30:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:33.739  ************************************
00:04:33.739  END TEST env_mem_callbacks
00:04:33.739  ************************************
00:04:33.739  
00:04:33.739  real	0m14.203s
00:04:33.739  user	0m11.106s
00:04:33.739  sys	0m2.133s
00:04:33.739   13:30:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.739   13:30:33 env -- common/autotest_common.sh@10 -- # set +x
00:04:33.739  ************************************
00:04:33.739  END TEST env
00:04:33.739  ************************************
00:04:33.739   13:30:33  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:33.739   13:30:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.739   13:30:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.739   13:30:33  -- common/autotest_common.sh@10 -- # set +x
00:04:33.739  ************************************
00:04:33.739  START TEST rpc
00:04:33.739  ************************************
00:04:33.739   13:30:33 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:33.739  * Looking for test storage...
00:04:33.739  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:33.739    13:30:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:33.739     13:30:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:33.739     13:30:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:33.999    13:30:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:33.999    13:30:33 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:33.999    13:30:33 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:33.999    13:30:33 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:33.999    13:30:33 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:33.999    13:30:33 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:33.999    13:30:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:33.999    13:30:33 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:33.999    13:30:33 rpc -- scripts/common.sh@345 -- # : 1
00:04:33.999    13:30:33 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:33.999    13:30:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:33.999     13:30:33 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:33.999     13:30:33 rpc -- scripts/common.sh@353 -- # local d=1
00:04:33.999     13:30:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:33.999     13:30:33 rpc -- scripts/common.sh@355 -- # echo 1
00:04:33.999    13:30:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:33.999     13:30:33 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:33.999     13:30:33 rpc -- scripts/common.sh@353 -- # local d=2
00:04:33.999     13:30:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:33.999     13:30:33 rpc -- scripts/common.sh@355 -- # echo 2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:33.999    13:30:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:33.999    13:30:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:33.999    13:30:33 rpc -- scripts/common.sh@368 -- # return 0
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:33.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.999  		--rc genhtml_branch_coverage=1
00:04:33.999  		--rc genhtml_function_coverage=1
00:04:33.999  		--rc genhtml_legend=1
00:04:33.999  		--rc geninfo_all_blocks=1
00:04:33.999  		--rc geninfo_unexecuted_blocks=1
00:04:33.999  		
00:04:33.999  		'
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:33.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.999  		--rc genhtml_branch_coverage=1
00:04:33.999  		--rc genhtml_function_coverage=1
00:04:33.999  		--rc genhtml_legend=1
00:04:33.999  		--rc geninfo_all_blocks=1
00:04:33.999  		--rc geninfo_unexecuted_blocks=1
00:04:33.999  		
00:04:33.999  		'
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:33.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.999  		--rc genhtml_branch_coverage=1
00:04:33.999  		--rc genhtml_function_coverage=1
00:04:33.999  		--rc genhtml_legend=1
00:04:33.999  		--rc geninfo_all_blocks=1
00:04:33.999  		--rc geninfo_unexecuted_blocks=1
00:04:33.999  		
00:04:33.999  		'
00:04:33.999    13:30:33 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:33.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.999  		--rc genhtml_branch_coverage=1
00:04:33.999  		--rc genhtml_function_coverage=1
00:04:33.999  		--rc genhtml_legend=1
00:04:33.999  		--rc geninfo_all_blocks=1
00:04:33.999  		--rc geninfo_unexecuted_blocks=1
00:04:33.999  		
00:04:33.999  		'
00:04:33.999   13:30:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3111492
00:04:33.999   13:30:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:33.999   13:30:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3111492
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 3111492 ']'
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:33.999  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:33.999   13:30:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.999   13:30:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:33.999  [2024-12-14 13:30:33.643871] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:33.999  [2024-12-14 13:30:33.643995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111492 ]
00:04:34.259  [2024-12-14 13:30:33.774346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:34.259  [2024-12-14 13:30:33.868836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:34.259  [2024-12-14 13:30:33.868884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3111492' to capture a snapshot of events at runtime.
00:04:34.259  [2024-12-14 13:30:33.868898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:34.259  [2024-12-14 13:30:33.868908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:34.259  [2024-12-14 13:30:33.868920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3111492 for offline analysis/debug.
00:04:34.259  [2024-12-14 13:30:33.870182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
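spdk_tgt came up with '-e bdev', so bdev tracepoints are live and the shm file named in the NOTICE lines holds the trace ring. The capture commands below use the invocation the target itself suggests; $SPDK_DIR and the -f file-reading flag are assumptions.

# Sketch: capture the trace advertised above, live or from the shm copy.
sudo "$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p 3111492
sudo "$SPDK_DIR/build/bin/spdk_trace" -f /dev/shm/spdk_tgt_trace.pid3111492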
00:04:35.197   13:30:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:35.197   13:30:34 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:35.197   13:30:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:35.197   13:30:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:35.197   13:30:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:35.197   13:30:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:35.197   13:30:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.197   13:30:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.197   13:30:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.197  ************************************
00:04:35.197  START TEST rpc_integrity
00:04:35.197  ************************************
00:04:35.197   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:35.197    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:35.197    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.197    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.197    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.197   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:35.197    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:35.197   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:35.198  {
00:04:35.198  "name": "Malloc0",
00:04:35.198  "aliases": [
00:04:35.198  "150d1bb6-51a7-4577-9583-064403aae85b"
00:04:35.198  ],
00:04:35.198  "product_name": "Malloc disk",
00:04:35.198  "block_size": 512,
00:04:35.198  "num_blocks": 16384,
00:04:35.198  "uuid": "150d1bb6-51a7-4577-9583-064403aae85b",
00:04:35.198  "assigned_rate_limits": {
00:04:35.198  "rw_ios_per_sec": 0,
00:04:35.198  "rw_mbytes_per_sec": 0,
00:04:35.198  "r_mbytes_per_sec": 0,
00:04:35.198  "w_mbytes_per_sec": 0
00:04:35.198  },
00:04:35.198  "claimed": false,
00:04:35.198  "zoned": false,
00:04:35.198  "supported_io_types": {
00:04:35.198  "read": true,
00:04:35.198  "write": true,
00:04:35.198  "unmap": true,
00:04:35.198  "flush": true,
00:04:35.198  "reset": true,
00:04:35.198  "nvme_admin": false,
00:04:35.198  "nvme_io": false,
00:04:35.198  "nvme_io_md": false,
00:04:35.198  "write_zeroes": true,
00:04:35.198  "zcopy": true,
00:04:35.198  "get_zone_info": false,
00:04:35.198  "zone_management": false,
00:04:35.198  "zone_append": false,
00:04:35.198  "compare": false,
00:04:35.198  "compare_and_write": false,
00:04:35.198  "abort": true,
00:04:35.198  "seek_hole": false,
00:04:35.198  "seek_data": false,
00:04:35.198  "copy": true,
00:04:35.198  "nvme_iov_md": false
00:04:35.198  },
00:04:35.198  "memory_domains": [
00:04:35.198  {
00:04:35.198  "dma_device_id": "system",
00:04:35.198  "dma_device_type": 1
00:04:35.198  },
00:04:35.198  {
00:04:35.198  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.198  "dma_device_type": 2
00:04:35.198  }
00:04:35.198  ],
00:04:35.198  "driver_specific": {}
00:04:35.198  }
00:04:35.198  ]'
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198  [2024-12-14 13:30:34.741626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:35.198  [2024-12-14 13:30:34.741672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:35.198  [2024-12-14 13:30:34.741697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680
00:04:35.198  [2024-12-14 13:30:34.741710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:35.198  [2024-12-14 13:30:34.743688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:35.198  [2024-12-14 13:30:34.743717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:35.198  Passthru0
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:35.198  {
00:04:35.198  "name": "Malloc0",
00:04:35.198  "aliases": [
00:04:35.198  "150d1bb6-51a7-4577-9583-064403aae85b"
00:04:35.198  ],
00:04:35.198  "product_name": "Malloc disk",
00:04:35.198  "block_size": 512,
00:04:35.198  "num_blocks": 16384,
00:04:35.198  "uuid": "150d1bb6-51a7-4577-9583-064403aae85b",
00:04:35.198  "assigned_rate_limits": {
00:04:35.198  "rw_ios_per_sec": 0,
00:04:35.198  "rw_mbytes_per_sec": 0,
00:04:35.198  "r_mbytes_per_sec": 0,
00:04:35.198  "w_mbytes_per_sec": 0
00:04:35.198  },
00:04:35.198  "claimed": true,
00:04:35.198  "claim_type": "exclusive_write",
00:04:35.198  "zoned": false,
00:04:35.198  "supported_io_types": {
00:04:35.198  "read": true,
00:04:35.198  "write": true,
00:04:35.198  "unmap": true,
00:04:35.198  "flush": true,
00:04:35.198  "reset": true,
00:04:35.198  "nvme_admin": false,
00:04:35.198  "nvme_io": false,
00:04:35.198  "nvme_io_md": false,
00:04:35.198  "write_zeroes": true,
00:04:35.198  "zcopy": true,
00:04:35.198  "get_zone_info": false,
00:04:35.198  "zone_management": false,
00:04:35.198  "zone_append": false,
00:04:35.198  "compare": false,
00:04:35.198  "compare_and_write": false,
00:04:35.198  "abort": true,
00:04:35.198  "seek_hole": false,
00:04:35.198  "seek_data": false,
00:04:35.198  "copy": true,
00:04:35.198  "nvme_iov_md": false
00:04:35.198  },
00:04:35.198  "memory_domains": [
00:04:35.198  {
00:04:35.198  "dma_device_id": "system",
00:04:35.198  "dma_device_type": 1
00:04:35.198  },
00:04:35.198  {
00:04:35.198  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.198  "dma_device_type": 2
00:04:35.198  }
00:04:35.198  ],
00:04:35.198  "driver_specific": {}
00:04:35.198  },
00:04:35.198  {
00:04:35.198  "name": "Passthru0",
00:04:35.198  "aliases": [
00:04:35.198  "d0ef1f1b-7a36-5ef0-a515-2fc6dfe5af4f"
00:04:35.198  ],
00:04:35.198  "product_name": "passthru",
00:04:35.198  "block_size": 512,
00:04:35.198  "num_blocks": 16384,
00:04:35.198  "uuid": "d0ef1f1b-7a36-5ef0-a515-2fc6dfe5af4f",
00:04:35.198  "assigned_rate_limits": {
00:04:35.198  "rw_ios_per_sec": 0,
00:04:35.198  "rw_mbytes_per_sec": 0,
00:04:35.198  "r_mbytes_per_sec": 0,
00:04:35.198  "w_mbytes_per_sec": 0
00:04:35.198  },
00:04:35.198  "claimed": false,
00:04:35.198  "zoned": false,
00:04:35.198  "supported_io_types": {
00:04:35.198  "read": true,
00:04:35.198  "write": true,
00:04:35.198  "unmap": true,
00:04:35.198  "flush": true,
00:04:35.198  "reset": true,
00:04:35.198  "nvme_admin": false,
00:04:35.198  "nvme_io": false,
00:04:35.198  "nvme_io_md": false,
00:04:35.198  "write_zeroes": true,
00:04:35.198  "zcopy": true,
00:04:35.198  "get_zone_info": false,
00:04:35.198  "zone_management": false,
00:04:35.198  "zone_append": false,
00:04:35.198  "compare": false,
00:04:35.198  "compare_and_write": false,
00:04:35.198  "abort": true,
00:04:35.198  "seek_hole": false,
00:04:35.198  "seek_data": false,
00:04:35.198  "copy": true,
00:04:35.198  "nvme_iov_md": false
00:04:35.198  },
00:04:35.198  "memory_domains": [
00:04:35.198  {
00:04:35.198  "dma_device_id": "system",
00:04:35.198  "dma_device_type": 1
00:04:35.198  },
00:04:35.198  {
00:04:35.198  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.198  "dma_device_type": 2
00:04:35.198  }
00:04:35.198  ],
00:04:35.198  "driver_specific": {
00:04:35.198  "passthru": {
00:04:35.198  "name": "Passthru0",
00:04:35.198  "base_bdev_name": "Malloc0"
00:04:35.198  }
00:04:35.198  }
00:04:35.198  }
00:04:35.198  ]'
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198    13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:35.198    13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:35.198   13:30:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:35.198  
00:04:35.198  real	0m0.291s
00:04:35.198  user	0m0.158s
00:04:35.198  sys	0m0.041s
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.198   13:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.198  ************************************
00:04:35.198  END TEST rpc_integrity
00:04:35.198  ************************************
00:04:35.458   13:30:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:35.458   13:30:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.458   13:30:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.458   13:30:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.458  ************************************
00:04:35.458  START TEST rpc_plugins
00:04:35.458  ************************************
00:04:35.458   13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:35.458    13:30:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:35.458    13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.458    13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:35.458    13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.458   13:30:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:35.458    13:30:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:35.458    13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.458    13:30:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:35.458    13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.458   13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:35.458  {
00:04:35.458  "name": "Malloc1",
00:04:35.458  "aliases": [
00:04:35.458  "11205fdc-020d-441c-99b0-5beba4f6c280"
00:04:35.458  ],
00:04:35.458  "product_name": "Malloc disk",
00:04:35.458  "block_size": 4096,
00:04:35.458  "num_blocks": 256,
00:04:35.458  "uuid": "11205fdc-020d-441c-99b0-5beba4f6c280",
00:04:35.458  "assigned_rate_limits": {
00:04:35.458  "rw_ios_per_sec": 0,
00:04:35.458  "rw_mbytes_per_sec": 0,
00:04:35.458  "r_mbytes_per_sec": 0,
00:04:35.458  "w_mbytes_per_sec": 0
00:04:35.458  },
00:04:35.458  "claimed": false,
00:04:35.458  "zoned": false,
00:04:35.458  "supported_io_types": {
00:04:35.458  "read": true,
00:04:35.458  "write": true,
00:04:35.458  "unmap": true,
00:04:35.458  "flush": true,
00:04:35.458  "reset": true,
00:04:35.458  "nvme_admin": false,
00:04:35.458  "nvme_io": false,
00:04:35.458  "nvme_io_md": false,
00:04:35.458  "write_zeroes": true,
00:04:35.458  "zcopy": true,
00:04:35.458  "get_zone_info": false,
00:04:35.458  "zone_management": false,
00:04:35.458  "zone_append": false,
00:04:35.458  "compare": false,
00:04:35.458  "compare_and_write": false,
00:04:35.458  "abort": true,
00:04:35.458  "seek_hole": false,
00:04:35.458  "seek_data": false,
00:04:35.458  "copy": true,
00:04:35.458  "nvme_iov_md": false
00:04:35.458  },
00:04:35.458  "memory_domains": [
00:04:35.458  {
00:04:35.458  "dma_device_id": "system",
00:04:35.458  "dma_device_type": 1
00:04:35.458  },
00:04:35.458  {
00:04:35.458  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.458  "dma_device_type": 2
00:04:35.458  }
00:04:35.458  ],
00:04:35.458  "driver_specific": {}
00:04:35.458  }
00:04:35.458  ]'
00:04:35.458    13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:35.458   13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:35.458   13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:35.458   13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.458   13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:35.458   13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.458    13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:35.458    13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.458    13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:35.458    13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.458   13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:35.458    13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:35.458   13:30:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:35.458  
00:04:35.458  real	0m0.142s
00:04:35.458  user	0m0.083s
00:04:35.458  sys	0m0.022s
00:04:35.458   13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.458   13:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:35.458  ************************************
00:04:35.458  END TEST rpc_plugins
00:04:35.458  ************************************
00:04:35.458   13:30:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:35.458   13:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.458   13:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.458   13:30:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.458  ************************************
00:04:35.458  START TEST rpc_trace_cmd_test
00:04:35.458  ************************************
00:04:35.458   13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:35.458   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:35.458    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:35.716    13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.716    13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:35.716    13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.716   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:35.716  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3111492",
00:04:35.716  "tpoint_group_mask": "0x8",
00:04:35.716  "iscsi_conn": {
00:04:35.716  "mask": "0x2",
00:04:35.716  "tpoint_mask": "0x0"
00:04:35.716  },
00:04:35.716  "scsi": {
00:04:35.716  "mask": "0x4",
00:04:35.716  "tpoint_mask": "0x0"
00:04:35.716  },
00:04:35.716  "bdev": {
00:04:35.716  "mask": "0x8",
00:04:35.716  "tpoint_mask": "0xffffffffffffffff"
00:04:35.716  },
00:04:35.716  "nvmf_rdma": {
00:04:35.716  "mask": "0x10",
00:04:35.716  "tpoint_mask": "0x0"
00:04:35.716  },
00:04:35.716  "nvmf_tcp": {
00:04:35.717  "mask": "0x20",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "ftl": {
00:04:35.717  "mask": "0x40",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "blobfs": {
00:04:35.717  "mask": "0x80",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "dsa": {
00:04:35.717  "mask": "0x200",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "thread": {
00:04:35.717  "mask": "0x400",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "nvme_pcie": {
00:04:35.717  "mask": "0x800",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "iaa": {
00:04:35.717  "mask": "0x1000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "nvme_tcp": {
00:04:35.717  "mask": "0x2000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "bdev_nvme": {
00:04:35.717  "mask": "0x4000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "sock": {
00:04:35.717  "mask": "0x8000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "blob": {
00:04:35.717  "mask": "0x10000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "bdev_raid": {
00:04:35.717  "mask": "0x20000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  },
00:04:35.717  "scheduler": {
00:04:35.717  "mask": "0x40000",
00:04:35.717  "tpoint_mask": "0x0"
00:04:35.717  }
00:04:35.717  }'
00:04:35.717    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:35.717    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:35.717    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:35.717    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:35.717    13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:35.717  
00:04:35.717  real	0m0.212s
00:04:35.717  user	0m0.175s
00:04:35.717  sys	0m0.028s
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.717   13:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:35.717  ************************************
00:04:35.717  END TEST rpc_trace_cmd_test
00:04:35.717  ************************************
00:04:35.717   13:30:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:35.717   13:30:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:35.717   13:30:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:35.717   13:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.717   13:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.717   13:30:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.976  ************************************
00:04:35.976  START TEST rpc_daemon_integrity
00:04:35.976  ************************************
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:35.976  {
00:04:35.976  "name": "Malloc2",
00:04:35.976  "aliases": [
00:04:35.976  "4f917fae-031a-47e3-ab21-31117cbd48e7"
00:04:35.976  ],
00:04:35.976  "product_name": "Malloc disk",
00:04:35.976  "block_size": 512,
00:04:35.976  "num_blocks": 16384,
00:04:35.976  "uuid": "4f917fae-031a-47e3-ab21-31117cbd48e7",
00:04:35.976  "assigned_rate_limits": {
00:04:35.976  "rw_ios_per_sec": 0,
00:04:35.976  "rw_mbytes_per_sec": 0,
00:04:35.976  "r_mbytes_per_sec": 0,
00:04:35.976  "w_mbytes_per_sec": 0
00:04:35.976  },
00:04:35.976  "claimed": false,
00:04:35.976  "zoned": false,
00:04:35.976  "supported_io_types": {
00:04:35.976  "read": true,
00:04:35.976  "write": true,
00:04:35.976  "unmap": true,
00:04:35.976  "flush": true,
00:04:35.976  "reset": true,
00:04:35.976  "nvme_admin": false,
00:04:35.976  "nvme_io": false,
00:04:35.976  "nvme_io_md": false,
00:04:35.976  "write_zeroes": true,
00:04:35.976  "zcopy": true,
00:04:35.976  "get_zone_info": false,
00:04:35.976  "zone_management": false,
00:04:35.976  "zone_append": false,
00:04:35.976  "compare": false,
00:04:35.976  "compare_and_write": false,
00:04:35.976  "abort": true,
00:04:35.976  "seek_hole": false,
00:04:35.976  "seek_data": false,
00:04:35.976  "copy": true,
00:04:35.976  "nvme_iov_md": false
00:04:35.976  },
00:04:35.976  "memory_domains": [
00:04:35.976  {
00:04:35.976  "dma_device_id": "system",
00:04:35.976  "dma_device_type": 1
00:04:35.976  },
00:04:35.976  {
00:04:35.976  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.976  "dma_device_type": 2
00:04:35.976  }
00:04:35.976  ],
00:04:35.976  "driver_specific": {}
00:04:35.976  }
00:04:35.976  ]'
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.976  [2024-12-14 13:30:35.614730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:35.976  [2024-12-14 13:30:35.614769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:35.976  [2024-12-14 13:30:35.614790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880
00:04:35.976  [2024-12-14 13:30:35.614802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:35.976  [2024-12-14 13:30:35.616893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:35.976  [2024-12-14 13:30:35.616920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:35.976  Passthru0
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.976    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.976   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:35.976  {
00:04:35.976  "name": "Malloc2",
00:04:35.976  "aliases": [
00:04:35.976  "4f917fae-031a-47e3-ab21-31117cbd48e7"
00:04:35.976  ],
00:04:35.976  "product_name": "Malloc disk",
00:04:35.976  "block_size": 512,
00:04:35.976  "num_blocks": 16384,
00:04:35.976  "uuid": "4f917fae-031a-47e3-ab21-31117cbd48e7",
00:04:35.976  "assigned_rate_limits": {
00:04:35.976  "rw_ios_per_sec": 0,
00:04:35.976  "rw_mbytes_per_sec": 0,
00:04:35.976  "r_mbytes_per_sec": 0,
00:04:35.976  "w_mbytes_per_sec": 0
00:04:35.976  },
00:04:35.976  "claimed": true,
00:04:35.976  "claim_type": "exclusive_write",
00:04:35.976  "zoned": false,
00:04:35.976  "supported_io_types": {
00:04:35.976  "read": true,
00:04:35.976  "write": true,
00:04:35.976  "unmap": true,
00:04:35.976  "flush": true,
00:04:35.976  "reset": true,
00:04:35.976  "nvme_admin": false,
00:04:35.976  "nvme_io": false,
00:04:35.976  "nvme_io_md": false,
00:04:35.976  "write_zeroes": true,
00:04:35.976  "zcopy": true,
00:04:35.976  "get_zone_info": false,
00:04:35.976  "zone_management": false,
00:04:35.976  "zone_append": false,
00:04:35.976  "compare": false,
00:04:35.976  "compare_and_write": false,
00:04:35.976  "abort": true,
00:04:35.976  "seek_hole": false,
00:04:35.976  "seek_data": false,
00:04:35.976  "copy": true,
00:04:35.976  "nvme_iov_md": false
00:04:35.976  },
00:04:35.976  "memory_domains": [
00:04:35.976  {
00:04:35.976  "dma_device_id": "system",
00:04:35.976  "dma_device_type": 1
00:04:35.976  },
00:04:35.976  {
00:04:35.976  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.976  "dma_device_type": 2
00:04:35.976  }
00:04:35.976  ],
00:04:35.976  "driver_specific": {}
00:04:35.976  },
00:04:35.976  {
00:04:35.976  "name": "Passthru0",
00:04:35.976  "aliases": [
00:04:35.976  "10d91973-2281-515b-b41d-aa7a40554efd"
00:04:35.976  ],
00:04:35.976  "product_name": "passthru",
00:04:35.976  "block_size": 512,
00:04:35.976  "num_blocks": 16384,
00:04:35.976  "uuid": "10d91973-2281-515b-b41d-aa7a40554efd",
00:04:35.976  "assigned_rate_limits": {
00:04:35.976  "rw_ios_per_sec": 0,
00:04:35.976  "rw_mbytes_per_sec": 0,
00:04:35.976  "r_mbytes_per_sec": 0,
00:04:35.976  "w_mbytes_per_sec": 0
00:04:35.976  },
00:04:35.976  "claimed": false,
00:04:35.976  "zoned": false,
00:04:35.976  "supported_io_types": {
00:04:35.976  "read": true,
00:04:35.976  "write": true,
00:04:35.976  "unmap": true,
00:04:35.976  "flush": true,
00:04:35.976  "reset": true,
00:04:35.976  "nvme_admin": false,
00:04:35.976  "nvme_io": false,
00:04:35.976  "nvme_io_md": false,
00:04:35.976  "write_zeroes": true,
00:04:35.976  "zcopy": true,
00:04:35.976  "get_zone_info": false,
00:04:35.976  "zone_management": false,
00:04:35.976  "zone_append": false,
00:04:35.976  "compare": false,
00:04:35.976  "compare_and_write": false,
00:04:35.976  "abort": true,
00:04:35.976  "seek_hole": false,
00:04:35.976  "seek_data": false,
00:04:35.976  "copy": true,
00:04:35.976  "nvme_iov_md": false
00:04:35.976  },
00:04:35.976  "memory_domains": [
00:04:35.976  {
00:04:35.976  "dma_device_id": "system",
00:04:35.976  "dma_device_type": 1
00:04:35.976  },
00:04:35.976  {
00:04:35.976  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:35.976  "dma_device_type": 2
00:04:35.976  }
00:04:35.976  ],
00:04:35.976  "driver_specific": {
00:04:35.976  "passthru": {
00:04:35.977  "name": "Passthru0",
00:04:35.977  "base_bdev_name": "Malloc2"
00:04:35.977  }
00:04:35.977  }
00:04:35.977  }
00:04:35.977  ]'
00:04:35.977    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
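The two lines above are the crux of the integrity check: the test captures `bdev_get_bdevs` into a variable and asserts with `jq length` that exactly two bdevs exist, the Malloc base plus the Passthru bdev claiming it. A standalone sketch of the same assertion, assuming SPDK's stock `scripts/rpc.py`, a target on the default socket, and `jq` on PATH:

```bash
#!/usr/bin/env bash
# Assert that a running SPDK target reports exactly the expected bdev count.
set -euo pipefail

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

bdevs=$($RPC bdev_get_bdevs)      # JSON array of bdev objects
count=$(jq length <<< "$bdevs")   # number of elements in that array

[ "$count" -eq 2 ] || { echo "expected 2 bdevs, got $count" >&2; exit 1; }
```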
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.977   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:36.236   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:36.236    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:36.236    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:36.236    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:36.236    13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:36.236   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:36.236    13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:36.236   13:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:36.236  
00:04:36.236  real	0m0.292s
00:04:36.236  user	0m0.164s
00:04:36.236  sys	0m0.037s
00:04:36.236   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.236   13:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:36.236  ************************************
00:04:36.236  END TEST rpc_daemon_integrity
00:04:36.236  ************************************
00:04:36.236   13:30:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:36.236   13:30:35 rpc -- rpc/rpc.sh@84 -- # killprocess 3111492
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 3111492 ']'
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@958 -- # kill -0 3111492
00:04:36.236    13:30:35 rpc -- common/autotest_common.sh@959 -- # uname
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:36.236    13:30:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111492
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111492'
00:04:36.236  killing process with pid 3111492
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@973 -- # kill 3111492
00:04:36.236   13:30:35 rpc -- common/autotest_common.sh@978 -- # wait 3111492
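The teardown above is the suite's killprocess idiom: probe liveness with `kill -0`, read the process name so a `sudo` wrapper is never signalled directly, then kill and `wait` so the exit status is reaped before the next test starts. A minimal sketch of the pattern (function name and details are illustrative, not the verbatim autotest_common.sh helper):

```bash
# Stop a daemon started by this shell and reap its exit status.
stop_daemon() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                       # already gone
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never kill sudo itself
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # wait can only reap children of this shell
}
```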
00:04:38.772  
00:04:38.772  real	0m4.703s
00:04:38.772  user	0m5.189s
00:04:38.772  sys	0m0.942s
00:04:38.772   13:30:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.772   13:30:38 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.772  ************************************
00:04:38.772  END TEST rpc
00:04:38.772  ************************************
00:04:38.772   13:30:38  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:38.772   13:30:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.772   13:30:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.772   13:30:38  -- common/autotest_common.sh@10 -- # set +x
00:04:38.772  ************************************
00:04:38.772  START TEST skip_rpc
00:04:38.772  ************************************
00:04:38.772   13:30:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:38.772  * Looking for test storage...
00:04:38.772  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:38.772    13:30:38 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:38.772     13:30:38 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:38.772     13:30:38 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:38.772    13:30:38 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:38.772    13:30:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:38.773     13:30:38 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:38.773    13:30:38 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:38.773    13:30:38 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:38.773    13:30:38 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:38.773  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.773  		--rc genhtml_branch_coverage=1
00:04:38.773  		--rc genhtml_function_coverage=1
00:04:38.773  		--rc genhtml_legend=1
00:04:38.773  		--rc geninfo_all_blocks=1
00:04:38.773  		--rc geninfo_unexecuted_blocks=1
00:04:38.773  		
00:04:38.773  		'
00:04:38.773    13:30:38 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:38.773  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.773  		--rc genhtml_branch_coverage=1
00:04:38.773  		--rc genhtml_function_coverage=1
00:04:38.773  		--rc genhtml_legend=1
00:04:38.773  		--rc geninfo_all_blocks=1
00:04:38.773  		--rc geninfo_unexecuted_blocks=1
00:04:38.773  		
00:04:38.773  		'
00:04:38.773    13:30:38 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:38.773  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.773  		--rc genhtml_branch_coverage=1
00:04:38.773  		--rc genhtml_function_coverage=1
00:04:38.773  		--rc genhtml_legend=1
00:04:38.773  		--rc geninfo_all_blocks=1
00:04:38.773  		--rc geninfo_unexecuted_blocks=1
00:04:38.773  		
00:04:38.773  		'
00:04:38.773    13:30:38 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:38.773  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.773  		--rc genhtml_branch_coverage=1
00:04:38.773  		--rc genhtml_function_coverage=1
00:04:38.773  		--rc genhtml_legend=1
00:04:38.773  		--rc geninfo_all_blocks=1
00:04:38.773  		--rc geninfo_unexecuted_blocks=1
00:04:38.773  		
00:04:38.773  		'
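The long trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, so the matching `--rc` coverage options can be exported. Note the `IFS=.-:` lines: each version string is split on dots, dashes and colons, then compared component-wise. A condensed, hedged sketch of the idea (dots only, not the verbatim cmp_versions):

```bash
# version_lt A B: succeed when dotted version A sorts strictly before B.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < max; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components compare as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc option spelling"
```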
00:04:38.773   13:30:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:04:38.773   13:30:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt
00:04:38.773   13:30:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:38.773   13:30:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.773   13:30:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.773   13:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.773  ************************************
00:04:38.773  START TEST skip_rpc
00:04:38.773  ************************************
00:04:38.773   13:30:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:38.773   13:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3112480
00:04:38.773   13:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:38.773   13:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:38.773   13:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:38.773  [2024-12-14 13:30:38.461665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:38.773  [2024-12-14 13:30:38.461751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112480 ]
00:04:39.032  [2024-12-14 13:30:38.590188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.032  [2024-12-14 13:30:38.683687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.305   13:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:44.305   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:44.306    13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
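Everything from the `NOT rpc_cmd` line down to the `(( !es == 0 ))` bookkeeping is a single assertion: with the target launched under `--no-rpc-server`, the RPC must fail, and the NOT wrapper inverts that failure into a pass. The inversion idiom, sketched standalone with a plain command in place of the suite's rpc_cmd shim:

```bash
# NOT <command...>: succeed exactly when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # unexpected success
    fi
    return 0       # the failure we were hoping for
}

# With --no-rpc-server there is no listener, so this assertion passes:
NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock spdk_get_version
```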
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3112480
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3112480 ']'
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3112480
00:04:44.306    13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:44.306    13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112480
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112480'
00:04:44.306  killing process with pid 3112480
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3112480
00:04:44.306   13:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3112480
00:04:46.210  
00:04:46.210  real	0m7.261s
00:04:46.210  user	0m6.859s
00:04:46.210  sys	0m0.432s
00:04:46.210   13:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:46.210   13:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:46.210  ************************************
00:04:46.210  END TEST skip_rpc
00:04:46.211  ************************************
00:04:46.211   13:30:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:46.211   13:30:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:46.211   13:30:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:46.211   13:30:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:46.211  ************************************
00:04:46.211  START TEST skip_rpc_with_json
00:04:46.211  ************************************
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3113837
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3113837
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3113837 ']'
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:46.211  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:46.211   13:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:46.211  [2024-12-14 13:30:45.799136] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:46.211  [2024-12-14 13:30:45.799250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113837 ]
00:04:46.211  [2024-12-14 13:30:45.930391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:46.470  [2024-12-14 13:30:46.025808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:47.039  [2024-12-14 13:30:46.761190] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:47.039  request:
00:04:47.039  {
00:04:47.039    "trtype": "tcp",
00:04:47.039    "method": "nvmf_get_transports",
00:04:47.039    "req_id": 1
00:04:47.039  }
00:04:47.039  Got JSON-RPC error response
00:04:47.039  response:
00:04:47.039  {
00:04:47.039    "code": -19,
00:04:47.039    "message": "No such device"
00:04:47.039  }
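The error above is the expected one: code -19 is -ENODEV, returned because a freshly started target has no TCP transport yet; the test then creates one and re-queries. Reproducing the probe-and-create by hand (assuming the stock scripts/rpc.py, which exits non-zero and prints the error when an RPC fails):

```bash
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

if ! $RPC nvmf_get_transports --trtype tcp 2>/dev/null; then
    $RPC nvmf_create_transport -t tcp      # logs "*** TCP Transport Init ***"
    $RPC nvmf_get_transports --trtype tcp  # now returns the transport object
fi
```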
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:47.039  [2024-12-14 13:30:46.769293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:47.039   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:47.298   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:47.298   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:04:47.298  {
00:04:47.298    "subsystems": [
00:04:47.298      {
00:04:47.298        "subsystem": "fsdev",
00:04:47.298        "config": [
00:04:47.298          {
00:04:47.298            "method": "fsdev_set_opts",
00:04:47.298            "params": {
00:04:47.298              "fsdev_io_pool_size": 65535,
00:04:47.298              "fsdev_io_cache_size": 256
00:04:47.298            }
00:04:47.298          }
00:04:47.298        ]
00:04:47.298      },
00:04:47.298      {
00:04:47.298        "subsystem": "keyring",
00:04:47.298        "config": []
00:04:47.298      },
00:04:47.298      {
00:04:47.298        "subsystem": "iobuf",
00:04:47.298        "config": [
00:04:47.299          {
00:04:47.299            "method": "iobuf_set_options",
00:04:47.299            "params": {
00:04:47.299              "small_pool_count": 8192,
00:04:47.299              "large_pool_count": 1024,
00:04:47.299              "small_bufsize": 8192,
00:04:47.299              "large_bufsize": 135168,
00:04:47.299              "enable_numa": false
00:04:47.299            }
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "sock",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "sock_set_default_impl",
00:04:47.299            "params": {
00:04:47.299              "impl_name": "posix"
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "sock_impl_set_options",
00:04:47.299            "params": {
00:04:47.299              "impl_name": "ssl",
00:04:47.299              "recv_buf_size": 4096,
00:04:47.299              "send_buf_size": 4096,
00:04:47.299              "enable_recv_pipe": true,
00:04:47.299              "enable_quickack": false,
00:04:47.299              "enable_placement_id": 0,
00:04:47.299              "enable_zerocopy_send_server": true,
00:04:47.299              "enable_zerocopy_send_client": false,
00:04:47.299              "zerocopy_threshold": 0,
00:04:47.299              "tls_version": 0,
00:04:47.299              "enable_ktls": false
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "sock_impl_set_options",
00:04:47.299            "params": {
00:04:47.299              "impl_name": "posix",
00:04:47.299              "recv_buf_size": 2097152,
00:04:47.299              "send_buf_size": 2097152,
00:04:47.299              "enable_recv_pipe": true,
00:04:47.299              "enable_quickack": false,
00:04:47.299              "enable_placement_id": 0,
00:04:47.299              "enable_zerocopy_send_server": true,
00:04:47.299              "enable_zerocopy_send_client": false,
00:04:47.299              "zerocopy_threshold": 0,
00:04:47.299              "tls_version": 0,
00:04:47.299              "enable_ktls": false
00:04:47.299            }
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "vmd",
00:04:47.299        "config": []
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "accel",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "accel_set_options",
00:04:47.299            "params": {
00:04:47.299              "small_cache_size": 128,
00:04:47.299              "large_cache_size": 16,
00:04:47.299              "task_count": 2048,
00:04:47.299              "sequence_count": 2048,
00:04:47.299              "buf_count": 2048
00:04:47.299            }
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "bdev",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "bdev_set_options",
00:04:47.299            "params": {
00:04:47.299              "bdev_io_pool_size": 65535,
00:04:47.299              "bdev_io_cache_size": 256,
00:04:47.299              "bdev_auto_examine": true,
00:04:47.299              "iobuf_small_cache_size": 128,
00:04:47.299              "iobuf_large_cache_size": 16
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "bdev_raid_set_options",
00:04:47.299            "params": {
00:04:47.299              "process_window_size_kb": 1024,
00:04:47.299              "process_max_bandwidth_mb_sec": 0
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "bdev_iscsi_set_options",
00:04:47.299            "params": {
00:04:47.299              "timeout_sec": 30
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "bdev_nvme_set_options",
00:04:47.299            "params": {
00:04:47.299              "action_on_timeout": "none",
00:04:47.299              "timeout_us": 0,
00:04:47.299              "timeout_admin_us": 0,
00:04:47.299              "keep_alive_timeout_ms": 10000,
00:04:47.299              "arbitration_burst": 0,
00:04:47.299              "low_priority_weight": 0,
00:04:47.299              "medium_priority_weight": 0,
00:04:47.299              "high_priority_weight": 0,
00:04:47.299              "nvme_adminq_poll_period_us": 10000,
00:04:47.299              "nvme_ioq_poll_period_us": 0,
00:04:47.299              "io_queue_requests": 0,
00:04:47.299              "delay_cmd_submit": true,
00:04:47.299              "transport_retry_count": 4,
00:04:47.299              "bdev_retry_count": 3,
00:04:47.299              "transport_ack_timeout": 0,
00:04:47.299              "ctrlr_loss_timeout_sec": 0,
00:04:47.299              "reconnect_delay_sec": 0,
00:04:47.299              "fast_io_fail_timeout_sec": 0,
00:04:47.299              "disable_auto_failback": false,
00:04:47.299              "generate_uuids": false,
00:04:47.299              "transport_tos": 0,
00:04:47.299              "nvme_error_stat": false,
00:04:47.299              "rdma_srq_size": 0,
00:04:47.299              "io_path_stat": false,
00:04:47.299              "allow_accel_sequence": false,
00:04:47.299              "rdma_max_cq_size": 0,
00:04:47.299              "rdma_cm_event_timeout_ms": 0,
00:04:47.299              "dhchap_digests": [
00:04:47.299                "sha256",
00:04:47.299                "sha384",
00:04:47.299                "sha512"
00:04:47.299              ],
00:04:47.299              "dhchap_dhgroups": [
00:04:47.299                "null",
00:04:47.299                "ffdhe2048",
00:04:47.299                "ffdhe3072",
00:04:47.299                "ffdhe4096",
00:04:47.299                "ffdhe6144",
00:04:47.299                "ffdhe8192"
00:04:47.299              ],
00:04:47.299              "rdma_umr_per_io": false
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "bdev_nvme_set_hotplug",
00:04:47.299            "params": {
00:04:47.299              "period_us": 100000,
00:04:47.299              "enable": false
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "bdev_wait_for_examine"
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "scsi",
00:04:47.299        "config": null
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "scheduler",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "framework_set_scheduler",
00:04:47.299            "params": {
00:04:47.299              "name": "static"
00:04:47.299            }
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "vhost_scsi",
00:04:47.299        "config": []
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "vhost_blk",
00:04:47.299        "config": []
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "ublk",
00:04:47.299        "config": []
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "nbd",
00:04:47.299        "config": []
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "nvmf",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "nvmf_set_config",
00:04:47.299            "params": {
00:04:47.299              "discovery_filter": "match_any",
00:04:47.299              "admin_cmd_passthru": {
00:04:47.299                "identify_ctrlr": false
00:04:47.299              },
00:04:47.299              "dhchap_digests": [
00:04:47.299                "sha256",
00:04:47.299                "sha384",
00:04:47.299                "sha512"
00:04:47.299              ],
00:04:47.299              "dhchap_dhgroups": [
00:04:47.299                "null",
00:04:47.299                "ffdhe2048",
00:04:47.299                "ffdhe3072",
00:04:47.299                "ffdhe4096",
00:04:47.299                "ffdhe6144",
00:04:47.299                "ffdhe8192"
00:04:47.299              ]
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "nvmf_set_max_subsystems",
00:04:47.299            "params": {
00:04:47.299              "max_subsystems": 1024
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "nvmf_set_crdt",
00:04:47.299            "params": {
00:04:47.299              "crdt1": 0,
00:04:47.299              "crdt2": 0,
00:04:47.299              "crdt3": 0
00:04:47.299            }
00:04:47.299          },
00:04:47.299          {
00:04:47.299            "method": "nvmf_create_transport",
00:04:47.299            "params": {
00:04:47.299              "trtype": "TCP",
00:04:47.299              "max_queue_depth": 128,
00:04:47.299              "max_io_qpairs_per_ctrlr": 127,
00:04:47.299              "in_capsule_data_size": 4096,
00:04:47.299              "max_io_size": 131072,
00:04:47.299              "io_unit_size": 131072,
00:04:47.299              "max_aq_depth": 128,
00:04:47.299              "num_shared_buffers": 511,
00:04:47.299              "buf_cache_size": 4294967295,
00:04:47.299              "dif_insert_or_strip": false,
00:04:47.299              "zcopy": false,
00:04:47.299              "c2h_success": true,
00:04:47.299              "sock_priority": 0,
00:04:47.299              "abort_timeout_sec": 1,
00:04:47.299              "ack_timeout": 0,
00:04:47.299              "data_wr_pool_size": 0
00:04:47.299            }
00:04:47.299          }
00:04:47.299        ]
00:04:47.299      },
00:04:47.299      {
00:04:47.299        "subsystem": "iscsi",
00:04:47.299        "config": [
00:04:47.299          {
00:04:47.299            "method": "iscsi_set_options",
00:04:47.299            "params": {
00:04:47.299              "node_base": "iqn.2016-06.io.spdk",
00:04:47.299              "max_sessions": 128,
00:04:47.299              "max_connections_per_session": 2,
00:04:47.299              "max_queue_depth": 64,
00:04:47.299              "default_time2wait": 2,
00:04:47.299              "default_time2retain": 20,
00:04:47.299              "first_burst_length": 8192,
00:04:47.299              "immediate_data": true,
00:04:47.299              "allow_duplicated_isid": false,
00:04:47.299              "error_recovery_level": 0,
00:04:47.299              "nop_timeout": 60,
00:04:47.299              "nop_in_interval": 30,
00:04:47.299              "disable_chap": false,
00:04:47.299              "require_chap": false,
00:04:47.299              "mutual_chap": false,
00:04:47.299              "chap_group": 0,
00:04:47.299              "max_large_datain_per_connection": 64,
00:04:47.300              "max_r2t_per_connection": 4,
00:04:47.300              "pdu_pool_size": 36864,
00:04:47.300              "immediate_data_pool_size": 16384,
00:04:47.300              "data_out_pool_size": 2048
00:04:47.300            }
00:04:47.300          }
00:04:47.300        ]
00:04:47.300      }
00:04:47.300    ]
00:04:47.300  }
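The file dumped above is save_config output: an ordered `subsystems` array whose entries each carry a `config` list of method/params pairs, replayed verbatim at startup. A hedged sketch of the round trip this test depends on (paths as in this workspace):

```bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# 1) Serialize the live target's state to a file...
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/config.json

# 2) ...then boot a fresh target non-interactively from that file; every
#    method/params pair above is re-issued, subsystem by subsystem.
$SPDK/build/bin/spdk_tgt -m 0x1 --json /tmp/config.json
```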
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3113837
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3113837 ']'
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3113837
00:04:47.300    13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:47.300    13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3113837
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3113837'
00:04:47.300  killing process with pid 3113837
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3113837
00:04:47.300   13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3113837
00:04:49.837   13:30:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3114385
00:04:49.837   13:30:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:49.837   13:30:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3114385
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3114385 ']'
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3114385
00:04:55.171    13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:55.171    13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114385
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114385'
00:04:55.171  killing process with pid 3114385
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3114385
00:04:55.171   13:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3114385
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt
00:04:57.078  
00:04:57.078  real	0m10.698s
00:04:57.078  user	0m10.221s
00:04:57.078  sys	0m0.923s
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:57.078  ************************************
00:04:57.078  END TEST skip_rpc_with_json
00:04:57.078  ************************************
00:04:57.078   13:30:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.078  ************************************
00:04:57.078  START TEST skip_rpc_with_delay
00:04:57.078  ************************************
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.078    13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.078    13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.078  [2024-12-14 13:30:56.569511] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:57.078  
00:04:57.078  real	0m0.150s
00:04:57.078  user	0m0.068s
00:04:57.078  sys	0m0.081s
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.078   13:30:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:57.078  ************************************
00:04:57.078  END TEST skip_rpc_with_delay
00:04:57.078  ************************************
00:04:57.078    13:30:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:57.078   13:30:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:57.078   13:30:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.078   13:30:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.078  ************************************
00:04:57.078  START TEST exit_on_failed_rpc_init
00:04:57.078  ************************************
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3115771
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3115771
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3115771 ']'
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:57.078  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:57.078   13:30:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.078  [2024-12-14 13:30:56.797417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:57.078  [2024-12-14 13:30:56.797516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115771 ]
00:04:57.338  [2024-12-14 13:30:56.928519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.338  [2024-12-14 13:30:57.026271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.277    13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.277    13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:58.277   13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.277  [2024-12-14 13:30:57.835819] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:58.277  [2024-12-14 13:30:57.835911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115925 ]
00:04:58.277  [2024-12-14 13:30:57.966239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.536  [2024-12-14 13:30:58.068053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:58.536  [2024-12-14 13:30:58.068145] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:58.536  [2024-12-14 13:30:58.068174] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:58.536  [2024-12-14 13:30:58.068188] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
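The three messages above are the scripted failure: a second spdk_tgt (core mask 0x2) tries to listen on the same default /var/tmp/spdk.sock the first instance already owns, rpc_listen refuses, and the app stops non-zero, which is exactly what exit_on_failed_rpc_init wants to observe. Outside this test, two targets coexist by giving each its own socket via -r; a hedged sketch:

```bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Disjoint core masks *and* distinct RPC sockets: no rpc_listen collision.
$SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
$SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

# Each instance is then driven through its own socket:
$SPDK/scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
$SPDK/scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version
```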
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3115771
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3115771 ']'
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3115771
00:04:58.795    13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:58.795    13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115771
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115771'
00:04:58.795  killing process with pid 3115771
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3115771
00:04:58.795   13:30:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3115771
00:05:01.333  
00:05:01.333  real	0m3.844s
00:05:01.333  user	0m4.141s
00:05:01.333  sys	0m0.639s
00:05:01.333   13:31:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.333   13:31:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:01.334  ************************************
00:05:01.334  END TEST exit_on_failed_rpc_init
00:05:01.334  ************************************
00:05:01.334   13:31:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:05:01.334  
00:05:01.334  real	0m22.438s
00:05:01.334  user	0m21.488s
00:05:01.334  sys	0m2.399s
00:05:01.334   13:31:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.334   13:31:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.334  ************************************
00:05:01.334  END TEST skip_rpc
00:05:01.334  ************************************
00:05:01.334   13:31:00  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:01.334   13:31:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.334   13:31:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.334   13:31:00  -- common/autotest_common.sh@10 -- # set +x
00:05:01.334  ************************************
00:05:01.334  START TEST rpc_client
00:05:01.334  ************************************
00:05:01.334   13:31:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:01.334  * Looking for test storage...
00:05:01.334  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:01.334     13:31:00 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:05:01.334     13:31:00 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.334     13:31:00 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.334    13:31:00 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:01.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.334  		--rc genhtml_branch_coverage=1
00:05:01.334  		--rc genhtml_function_coverage=1
00:05:01.334  		--rc genhtml_legend=1
00:05:01.334  		--rc geninfo_all_blocks=1
00:05:01.334  		--rc geninfo_unexecuted_blocks=1
00:05:01.334  		
00:05:01.334  		'
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:01.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.334  		--rc genhtml_branch_coverage=1
00:05:01.334  		--rc genhtml_function_coverage=1
00:05:01.334  		--rc genhtml_legend=1
00:05:01.334  		--rc geninfo_all_blocks=1
00:05:01.334  		--rc geninfo_unexecuted_blocks=1
00:05:01.334  		
00:05:01.334  		'
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:01.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.334  		--rc genhtml_branch_coverage=1
00:05:01.334  		--rc genhtml_function_coverage=1
00:05:01.334  		--rc genhtml_legend=1
00:05:01.334  		--rc geninfo_all_blocks=1
00:05:01.334  		--rc geninfo_unexecuted_blocks=1
00:05:01.334  		
00:05:01.334  		'
00:05:01.334    13:31:00 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:01.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.334  		--rc genhtml_branch_coverage=1
00:05:01.334  		--rc genhtml_function_coverage=1
00:05:01.334  		--rc genhtml_legend=1
00:05:01.334  		--rc geninfo_all_blocks=1
00:05:01.334  		--rc geninfo_unexecuted_blocks=1
00:05:01.334  		
00:05:01.334  		'
00:05:01.334   13:31:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:01.334  OK
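rpc_client_test is a compiled C harness exercising SPDK's JSON-RPC client library; the bare OK is its pass marker. The equivalent hand-driven exchange, assuming a target listening on the default socket (response shape abbreviated):

```bash
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock spdk_get_version
# => {"version": "SPDK v25.01-pre ...", "fields": {...}}
```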
00:05:01.334   13:31:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:01.334  
00:05:01.334  real	0m0.218s
00:05:01.334  user	0m0.115s
00:05:01.334  sys	0m0.111s
00:05:01.334   13:31:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.334   13:31:00 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:01.334  ************************************
00:05:01.334  END TEST rpc_client
00:05:01.334  ************************************
00:05:01.334   13:31:00  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh
00:05:01.334   13:31:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.334   13:31:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.334   13:31:00  -- common/autotest_common.sh@10 -- # set +x
00:05:01.334  ************************************
00:05:01.334  START TEST json_config
00:05:01.334  ************************************
00:05:01.334   13:31:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh
00:05:01.334    13:31:01 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:01.334     13:31:01 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:05:01.334     13:31:01 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.594    13:31:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.594    13:31:01 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.594    13:31:01 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.594    13:31:01 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.594    13:31:01 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.594    13:31:01 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.594    13:31:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.594    13:31:01 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:01.594    13:31:01 json_config -- scripts/common.sh@345 -- # : 1
00:05:01.594    13:31:01 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.594    13:31:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.594     13:31:01 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:01.594     13:31:01 json_config -- scripts/common.sh@353 -- # local d=1
00:05:01.594     13:31:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.594     13:31:01 json_config -- scripts/common.sh@355 -- # echo 1
00:05:01.594    13:31:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.594     13:31:01 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:01.594     13:31:01 json_config -- scripts/common.sh@353 -- # local d=2
00:05:01.594     13:31:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.594     13:31:01 json_config -- scripts/common.sh@355 -- # echo 2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.594    13:31:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.594    13:31:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.594    13:31:01 json_config -- scripts/common.sh@368 -- # return 0
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.594  		--rc genhtml_branch_coverage=1
00:05:01.594  		--rc genhtml_function_coverage=1
00:05:01.594  		--rc genhtml_legend=1
00:05:01.594  		--rc geninfo_all_blocks=1
00:05:01.594  		--rc geninfo_unexecuted_blocks=1
00:05:01.594  		
00:05:01.594  		'
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.594  		--rc genhtml_branch_coverage=1
00:05:01.594  		--rc genhtml_function_coverage=1
00:05:01.594  		--rc genhtml_legend=1
00:05:01.594  		--rc geninfo_all_blocks=1
00:05:01.594  		--rc geninfo_unexecuted_blocks=1
00:05:01.594  		
00:05:01.594  		'
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.594  		--rc genhtml_branch_coverage=1
00:05:01.594  		--rc genhtml_function_coverage=1
00:05:01.594  		--rc genhtml_legend=1
00:05:01.594  		--rc geninfo_all_blocks=1
00:05:01.594  		--rc geninfo_unexecuted_blocks=1
00:05:01.594  		
00:05:01.594  		'
00:05:01.594    13:31:01 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.594  		--rc genhtml_branch_coverage=1
00:05:01.594  		--rc genhtml_function_coverage=1
00:05:01.594  		--rc genhtml_legend=1
00:05:01.594  		--rc geninfo_all_blocks=1
00:05:01.594  		--rc geninfo_unexecuted_blocks=1
00:05:01.594  		
00:05:01.594  		'
00:05:01.594   13:31:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:05:01.594     13:31:01 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:01.594     13:31:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
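The host identity above is minted fresh per run: `nvme gen-hostnqn` (nvme-cli) emits a UUID-based NQN, and the suite reuses the embedded UUID as the bare host ID. Sketched with parameter expansion (illustrative; the real common.sh wiring differs in detail):

```bash
NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep only the trailing <uuid>
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
```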
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:05:01.594     13:31:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:01.594     13:31:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:01.594     13:31:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:01.594     13:31:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:01.594      13:31:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.594      13:31:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.594      13:31:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.594      13:31:01 json_config -- paths/export.sh@5 -- # export PATH
00:05:01.594      13:31:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
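The PATH echoed above carries the same /opt/golangci, /opt/protoc and /opt/go segments several times because paths/export.sh prepends them unconditionally each time it is sourced; a minimal sketch of that prepend pattern, with versions taken from the trace:

# paths/export.sh@2-5 prepend the tool dirs on every source, so the nested
# sourcing in this run accumulates duplicate PATH segments.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH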
00:05:01.594    13:31:01 json_config -- nvmf/common.sh@51 -- # : 0
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:01.595  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:01.595    13:31:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
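The "integer expression expected" message above is bash complaining that nvmf/common.sh line 33 ran an arithmetic test on an empty value ('[' '' -eq 1 ']'): [ ... -eq ... ] requires integer operands on both sides. A minimal reproduction plus two defensive variants; VAR is a hypothetical stand-in for whatever flag that line tests:

VAR=""
[ "$VAR" -eq 1 ] && echo "enabled"        # errors: integer expression expected
# Variants that tolerate an empty or unset value:
[ "${VAR:-0}" -eq 1 ] && echo "enabled"   # default to 0 before the numeric test
[[ $VAR == 1 ]] && echo "enabled"         # string comparison avoids the error entirely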
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json')
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
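json_config/common.sh keeps per-app state in parallel associative arrays keyed by role, exactly as declared above; a self-contained sketch of that lookup pattern, with values copied from the trace:

# Parallel associative arrays keyed by app role, as in json_config/common.sh@31-34
declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
app=target
echo "spdk_tgt ${app_params[$app]} -r ${app_socket[$app]}"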
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:05:01.595  INFO: JSON configuration test init
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.595   13:31:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:05:01.595   13:31:01 json_config -- json_config/common.sh@9 -- # local app=target
00:05:01.595   13:31:01 json_config -- json_config/common.sh@10 -- # shift
00:05:01.595   13:31:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:01.595   13:31:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:01.595   13:31:01 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:01.595   13:31:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.595   13:31:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.595   13:31:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3116710
00:05:01.595   13:31:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:01.595  Waiting for target to run...
00:05:01.595   13:31:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3116710 /var/tmp/spdk_tgt.sock
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3116710 ']'
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.595   13:31:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:01.595  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.595   13:31:01 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.595  [2024-12-14 13:31:01.250101] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:01.595  [2024-12-14 13:31:01.250207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116710 ]
00:05:02.163  [2024-12-14 13:31:01.597820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.163  [2024-12-14 13:31:01.687180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:02.423   13:31:02 json_config -- json_config/common.sh@26 -- # echo ''
00:05:02.423  
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:02.423   13:31:02 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:02.423   13:31:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:05:02.423   13:31:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:06.619    13:31:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@51 -- # local get_types
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@54 -- # sort
00:05:06.619    13:31:05 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@62 -- # return 0
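tgt_check_notification_types above compares the expected and reported type lists with a pure-shell symmetric difference: after sort, uniq -u keeps only lines that occur once, so any type present in both lists cancels out and an empty result means they match. The same idiom in isolation:

# Symmetric difference of two space-separated lists (json_config.sh@54)
enabled="bdev_register bdev_unregister fsdev_register fsdev_unregister"
reported="fsdev_register fsdev_unregister bdev_register bdev_unregister"
type_diff=$(echo $enabled $reported | tr ' ' '\n' | sort | uniq -u)
[[ -z $type_diff ]] && echo "notification types match"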
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]]
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma
00:05:06.619   13:31:05 json_config -- json_config/json_config.sh@241 -- # nvmftestinit
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:05:06.619    13:31:05 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]]
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:05:06.619   13:31:05 json_config -- nvmf/common.sh@309 -- # xtrace_disable
00:05:06.619   13:31:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@315 -- # pci_devs=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@315 -- # local -a pci_devs
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@316 -- # pci_net_devs=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@317 -- # pci_drivers=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@319 -- # net_devs=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@319 -- # local -ga net_devs
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@320 -- # e810=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@320 -- # local -ga e810
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@321 -- # x722=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@321 -- # local -ga x722
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@322 -- # mlx=()
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@322 -- # local -ga mlx
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:05:13.247   13:31:12 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:05:13.248  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:05:13.248  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:05:13.248  Found net devices under 0000:d9:00.0: mlx_0_0
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:05:13.248  Found net devices under 0000:d9:00.1: mlx_0_1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@442 -- # is_hw=yes
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@448 -- # rdma_device_init
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@62 -- # uname
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@67 -- # modprobe ib_core
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@109 -- # continue 2
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@109 -- # continue 2
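get_rdma_if_list above checks every detected net device against the rxe device list and uses continue 2 to jump straight to the next outer-loop iteration once a match is echoed; a minimal sketch of that nested-loop control flow, with device names from this run:

# `continue 2` resumes the *outer* loop, as in nvmf/common.sh@105-109
for net_dev in mlx_0_0 mlx_0_1; do
    for rxe_net_dev in mlx_0_0 mlx_0_1; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            echo "$net_dev"
            continue 2   # done with this net_dev; move on to the next one
        fi
    done
done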
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:05:13.248  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:05:13.248      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:05:13.248      altname enp217s0f0np0
00:05:13.248      altname ens818f0np0
00:05:13.248      inet 192.168.100.8/24 scope global mlx_0_0
00:05:13.248         valid_lft forever preferred_lft forever
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:05:13.248  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:05:13.248      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:05:13.248      altname enp217s0f1np1
00:05:13.248      altname ens818f1np1
00:05:13.248      inet 192.168.100.9/24 scope global mlx_0_1
00:05:13.248         valid_lft forever preferred_lft forever
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@450 -- # return 0
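get_ip_address above recovers each interface's IPv4 address with a three-stage pipeline over single-line ip output; the same extraction standalone, with the interface name taken from this run:

# Extract the IPv4 address of an interface (nvmf/common.sh@116-117)
interface=mlx_0_0           # from the trace; any local interface works
ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
# -o prints one line per address; field 4 is "192.168.100.8/24"; cut strips the prefix length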
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:05:13.248      13:31:12 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:05:13.248      13:31:12 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@109 -- # continue 2
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1
00:05:13.248     13:31:12 json_config -- nvmf/common.sh@109 -- # continue 2
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:05:13.248  192.168.100.9'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:05:13.248  192.168.100.9'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@485 -- # head -n 1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:05:13.248  192.168.100.9'
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@486 -- # tail -n +2
00:05:13.248    13:31:12 json_config -- nvmf/common.sh@486 -- # head -n 1
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
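RDMA_IP_LIST holds one address per line, and the first and second target IPs are peeled off positionally, as traced above; the same split in two lines:

# Split a newline-separated address list (nvmf/common.sh@485-486)
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)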
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:05:13.248   13:31:12 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:05:13.249   13:31:12 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:05:13.249   13:31:12 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:05:13.249   13:31:12 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]]
00:05:13.249   13:31:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:13.249   13:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:13.508  MallocForNvmf0
00:05:13.508   13:31:13 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:13.508   13:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:13.767  MallocForNvmf1
00:05:13.767   13:31:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0
00:05:13.767   13:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0
00:05:13.767  [2024-12-14 13:31:13.470856] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:05:14.026  [2024-12-14 13:31:13.507145] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f181deba940) succeed.
00:05:14.026  [2024-12-14 13:31:13.519791] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f181de76940) succeed.
00:05:14.026   13:31:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:14.026   13:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:14.026   13:31:13 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:14.026   13:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:14.285   13:31:13 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:14.285   13:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:14.544   13:31:14 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:05:14.544   13:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:05:14.544  [2024-12-14 13:31:14.276805] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
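Collected in order, the RPCs traced since json_config.sh@249 assemble the whole NVMe-oF/RDMA target; every command below appears verbatim in the log above, only gathered behind one $RPC shorthand:

# Target construction sequence, exactly as issued above
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t rdma -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420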
00:05:14.803   13:31:14 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:14.803   13:31:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:14.803   13:31:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:14.803   13:31:14 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:14.803   13:31:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:14.803   13:31:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:14.803   13:31:14 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:14.803   13:31:14 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:14.803   13:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:15.062  MallocBdevForConfigChangeCheck
00:05:15.062   13:31:14 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:15.062   13:31:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:15.062   13:31:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.062   13:31:14 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:15.062   13:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:15.321   13:31:14 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:05:15.321  INFO: shutting down applications...
00:05:15.321   13:31:14 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:15.321   13:31:14 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:15.321   13:31:14 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:15.321   13:31:14 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:17.854  Calling clear_iscsi_subsystem
00:05:17.854  Calling clear_nvmf_subsystem
00:05:17.854  Calling clear_nbd_subsystem
00:05:17.854  Calling clear_ublk_subsystem
00:05:17.854  Calling clear_vhost_blk_subsystem
00:05:17.854  Calling clear_vhost_scsi_subsystem
00:05:17.854  Calling clear_bdev_subsystem
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:17.854   13:31:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:18.422   13:31:17 json_config -- json_config/json_config.sh@352 -- # break
00:05:18.422   13:31:17 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:18.422   13:31:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:18.422   13:31:17 json_config -- json_config/common.sh@31 -- # local app=target
00:05:18.422   13:31:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:18.422   13:31:17 json_config -- json_config/common.sh@35 -- # [[ -n 3116710 ]]
00:05:18.422   13:31:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3116710
00:05:18.422   13:31:17 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:18.422   13:31:17 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:18.422   13:31:17 json_config -- json_config/common.sh@41 -- # kill -0 3116710
00:05:18.422   13:31:17 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:18.682   13:31:18 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:18.682   13:31:18 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:18.682   13:31:18 json_config -- json_config/common.sh@41 -- # kill -0 3116710
00:05:18.682   13:31:18 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:19.250   13:31:18 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:19.250   13:31:18 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:19.250   13:31:18 json_config -- json_config/common.sh@41 -- # kill -0 3116710
00:05:19.250   13:31:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:19.250   13:31:18 json_config -- json_config/common.sh@43 -- # break
00:05:19.250   13:31:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:19.250   13:31:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:19.250  SPDK target shutdown done
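json_config_test_shutdown_app above asks the target to exit with SIGINT and then polls kill -0 (which delivers no signal, only an existence check) every half second, up to 30 tries; the same bounded wait, with pid as a placeholder:

# Bounded graceful-shutdown wait (json_config/common.sh@38-45)
pid=3116710                      # placeholder: the target pid from this run
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the process exists
    sleep 0.5
done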
00:05:19.250   13:31:18 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:19.250  INFO: relaunching applications...
00:05:19.250   13:31:18 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.250   13:31:18 json_config -- json_config/common.sh@9 -- # local app=target
00:05:19.250   13:31:18 json_config -- json_config/common.sh@10 -- # shift
00:05:19.250   13:31:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:19.250   13:31:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:19.250   13:31:18 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:19.250   13:31:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.250   13:31:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.250   13:31:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3121821
00:05:19.250   13:31:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:19.250  Waiting for target to run...
00:05:19.250   13:31:18 json_config -- json_config/common.sh@25 -- # waitforlisten 3121821 /var/tmp/spdk_tgt.sock
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 3121821 ']'
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:19.250  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.250   13:31:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.250   13:31:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.250  [2024-12-14 13:31:18.966104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:19.251  [2024-12-14 13:31:18.966206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121821 ]
00:05:19.819  [2024-12-14 13:31:19.317421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.819  [2024-12-14 13:31:19.409529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.011  [2024-12-14 13:31:23.051509] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f6747e8f940) succeed.
00:05:24.011  [2024-12-14 13:31:23.062924] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f6747e4b940) succeed.
00:05:24.011  [2024-12-14 13:31:23.124738] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:05:24.011   13:31:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.011   13:31:23 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:24.011   13:31:23 json_config -- json_config/common.sh@26 -- # echo ''
00:05:24.011  
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:24.011  INFO: Checking if target configuration is the same...
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.011    13:31:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:24.011    13:31:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:24.011  + '[' 2 -ne 2 ']'
00:05:24.011  +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:24.011  ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../..
00:05:24.011  + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:05:24.011  +++ basename /dev/fd/62
00:05:24.011  ++ mktemp /tmp/62.XXX
00:05:24.011  + tmp_file_1=/tmp/62.w0U
00:05:24.011  +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.011  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:24.011  + tmp_file_2=/tmp/spdk_tgt_config.json.mjb
00:05:24.011  + ret=0
00:05:24.011  + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.011  + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.011  + diff -u /tmp/62.w0U /tmp/spdk_tgt_config.json.mjb
00:05:24.011  + echo 'INFO: JSON config files are the same'
00:05:24.011  INFO: JSON config files are the same
00:05:24.011  + rm /tmp/62.w0U /tmp/spdk_tgt_config.json.mjb
00:05:24.011  + exit 0
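json_diff.sh above decides config equality by normalizing both JSON documents through config_filter.py -method sort into temp files and running diff -u; a sketch of that core, assuming the filter reads stdin (the argument-less invocations above suggest it does) and using hypothetical input names:

# Order-insensitive JSON comparison, as in json_diff.sh
SORT="/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort"
$SORT < live_config.json     > /tmp/a.json   # live_config.json: hypothetical stand-in for the save_config stream
$SORT < spdk_tgt_config.json > /tmp/b.json
diff -u /tmp/a.json /tmp/b.json && echo 'INFO: JSON config files are the same'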
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:24.011  INFO: changing configuration and checking if this can be detected...
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:24.011   13:31:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:24.011    13:31:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:05:24.011    13:31:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:24.011   13:31:23 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.271  + '[' 2 -ne 2 ']'
00:05:24.271  +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:24.271  ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../..
00:05:24.271  + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:05:24.271  +++ basename /dev/fd/62
00:05:24.271  ++ mktemp /tmp/62.XXX
00:05:24.271  + tmp_file_1=/tmp/62.bHC
00:05:24.271  +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.271  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:24.271  + tmp_file_2=/tmp/spdk_tgt_config.json.WOc
00:05:24.271  + ret=0
00:05:24.271  + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.530  + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.530  + diff -u /tmp/62.bHC /tmp/spdk_tgt_config.json.WOc
00:05:24.530  + ret=1
00:05:24.530  + echo '=== Start of file: /tmp/62.bHC ==='
00:05:24.530  + cat /tmp/62.bHC
00:05:24.530  + echo '=== End of file: /tmp/62.bHC ==='
00:05:24.530  + echo ''
00:05:24.530  + echo '=== Start of file: /tmp/spdk_tgt_config.json.WOc ==='
00:05:24.530  + cat /tmp/spdk_tgt_config.json.WOc
00:05:24.530  + echo '=== End of file: /tmp/spdk_tgt_config.json.WOc ==='
00:05:24.530  + echo ''
00:05:24.530  + rm /tmp/62.bHC /tmp/spdk_tgt_config.json.WOc
00:05:24.530  + exit 1
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:05:24.530  INFO: configuration change detected.
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 3121821 ]]
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:05:24.530    13:31:24 json_config -- json_config/json_config.sh@200 -- # uname -s
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.530   13:31:24 json_config -- json_config/json_config.sh@330 -- # killprocess 3121821
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@954 -- # '[' -z 3121821 ']'
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@958 -- # kill -0 3121821
00:05:24.530    13:31:24 json_config -- common/autotest_common.sh@959 -- # uname
00:05:24.530   13:31:24 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:24.530    13:31:24 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3121821
00:05:24.790   13:31:24 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:24.790   13:31:24 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:24.790   13:31:24 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3121821'
00:05:24.790  killing process with pid 3121821
00:05:24.790   13:31:24 json_config -- common/autotest_common.sh@973 -- # kill 3121821
00:05:24.790   13:31:24 json_config -- common/autotest_common.sh@978 -- # wait 3121821
00:05:28.081   13:31:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:05:28.081   13:31:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:28.081   13:31:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:28.081   13:31:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:28.081   13:31:27 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:28.081   13:31:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:28.081  INFO: Success
00:05:28.081   13:31:27 json_config -- json_config/json_config.sh@1 -- # nvmftestfini
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@121 -- # sync
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']'
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']'
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:28.081   13:31:27 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]]
00:05:28.081  
00:05:28.081  real	0m26.501s
00:05:28.081  user	0m28.762s
00:05:28.081  sys	0m8.275s
00:05:28.081   13:31:27 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.081   13:31:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:28.081  ************************************
00:05:28.081  END TEST json_config
00:05:28.081  ************************************
00:05:28.081   13:31:27  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:28.081   13:31:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.081   13:31:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.081   13:31:27  -- common/autotest_common.sh@10 -- # set +x
00:05:28.081  ************************************
00:05:28.081  START TEST json_config_extra_key
00:05:28.081  ************************************
00:05:28.081   13:31:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:28.081    13:31:27 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:28.081     13:31:27 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:05:28.081     13:31:27 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:28.081    13:31:27 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:28.081    13:31:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:28.081     13:31:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:28.082    13:31:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:28.082    13:31:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:28.082    13:31:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:28.082    13:31:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:28.082    13:31:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:28.082    13:31:27 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:28.082    13:31:27 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:28.082  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.082  		--rc genhtml_branch_coverage=1
00:05:28.082  		--rc genhtml_function_coverage=1
00:05:28.082  		--rc genhtml_legend=1
00:05:28.082  		--rc geninfo_all_blocks=1
00:05:28.082  		--rc geninfo_unexecuted_blocks=1
00:05:28.082  		
00:05:28.082  		'
00:05:28.082    13:31:27 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:28.082  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.082  		--rc genhtml_branch_coverage=1
00:05:28.082  		--rc genhtml_function_coverage=1
00:05:28.082  		--rc genhtml_legend=1
00:05:28.082  		--rc geninfo_all_blocks=1
00:05:28.082  		--rc geninfo_unexecuted_blocks=1
00:05:28.082  		
00:05:28.082  		'
00:05:28.082    13:31:27 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:28.082  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.082  		--rc genhtml_branch_coverage=1
00:05:28.082  		--rc genhtml_function_coverage=1
00:05:28.082  		--rc genhtml_legend=1
00:05:28.082  		--rc geninfo_all_blocks=1
00:05:28.082  		--rc geninfo_unexecuted_blocks=1
00:05:28.082  		
00:05:28.082  		'
00:05:28.082    13:31:27 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:28.082  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.082  		--rc genhtml_branch_coverage=1
00:05:28.082  		--rc genhtml_function_coverage=1
00:05:28.082  		--rc genhtml_legend=1
00:05:28.082  		--rc geninfo_all_blocks=1
00:05:28.082  		--rc geninfo_unexecuted_blocks=1
00:05:28.082  		
00:05:28.082  		'
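
Because the detected lcov is pre-2.x, the harness passes its coverage knobs as `--rc` options, exported once in LCOV_OPTS/LCOV and reused by every test. A hedged sketch of how such options are typically consumed when capturing coverage; the build directory, output names, and genhtml flags here are illustrative, not taken from this run:

```bash
# Hypothetical coverage capture using the --rc options exported above.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# Collect counters from an instrumented build tree into an .info file,
# then render an HTML report with branch/function columns and a legend.
lcov $LCOV_OPTS --capture --directory ./build --output-file coverage.info
genhtml --branch-coverage --function-coverage --legend \
        --output-directory coverage_html coverage.info
```
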
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:05:28.082     13:31:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:28.082     13:31:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
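
test/nvmf/common.sh seeds the NVMe-oF test environment above: the standard SPDK listener ports (4420-4422), a host NQN freshly generated with `nvme gen-hostnqn`, and the matching host ID (the UUID embedded in that NQN). A hedged sketch of how an initiator would typically use those values; the target address and the connect itself are illustrative, not part of this run:

```bash
# Hypothetical initiator connect built from the values set up above.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID portion

nvme connect -t tcp -a 127.0.0.1 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
```
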
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:28.082     13:31:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:28.082      13:31:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.082      13:31:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.082      13:31:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.082      13:31:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:28.082      13:31:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
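
Each of the three paths/export.sh lines above prepends the same toolchain directories again (go, golangci-lint, protoc), so PATH ends up carrying every entry three or four times; harmless, but noisy. A small sketch of an idempotent prepend that would avoid the duplication (the helper name is illustrative):

```bash
# Hypothetical helper: prepend a directory to PATH only if absent.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, nothing to do
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
```
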
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:28.082  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:28.082    13:31:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
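
The "integer expression expected" message above is a real scripting bug caught by the log: line 33 of nvmf/common.sh expands an unset flag into `'[' '' -eq 1 ']'`, and test's `-eq` needs integers on both sides. The usual guard is to default the expansion; a minimal sketch, with an illustrative variable name standing in for the unset flag:

```bash
FLAG=""                                  # stands in for the unset test flag

# The failing shape from the trace above:
#   [ "$FLAG" -eq 1 ]   -> "integer expression expected" when FLAG is empty

# Defaulting the expansion keeps the test well-formed either way:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "interrupt mode on"
else
    echo "interrupt mode off"            # taken when FLAG is empty or 0
fi
```
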
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
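
The ERR trap above is registered before anything is launched, so any failing command in the test reports the function and line where it died and the harness can tear the target down. A minimal sketch of the pattern, assuming an illustrative handler body (SPDK's on_error_exit does more):

```bash
# Hypothetical ERR-trap handler in the spirit of on_error_exit above.
on_error_exit() {
    local func=$1 line=$2
    echo "ERROR: command failed in ${func:-main} at line $line" >&2
    [ -n "${app_pid:-}" ] && kill -SIGINT "$app_pid" 2>/dev/null
    exit 1
}
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR

false    # any failing simple command now routes through the handler
```
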
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:28.082  INFO: launching applications...
00:05:28.082   13:31:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3123542
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:28.082  Waiting for target to run...
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3123542 /var/tmp/spdk_tgt.sock
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3123542 ']'
00:05:28.082   13:31:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:28.082  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.082   13:31:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:28.342  [2024-12-14 13:31:27.845279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:28.342  [2024-12-14 13:31:27.845376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123542 ]
00:05:28.910  [2024-12-14 13:31:28.354551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.910  [2024-12-14 13:31:28.452187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.478   13:31:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:29.478   13:31:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:29.478  
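
waitforlisten above parks the test until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket, with a retry budget (max_retries=100). A hedged sketch of the same polling idea using the rpc.py client; the socket path matches this run, the loop body is illustrative:

```bash
# Hypothetical wait-for-RPC loop in the spirit of waitforlisten above.
rpc_addr=/var/tmp/spdk_tgt.sock
max_retries=100

for (( i = 0; i < max_retries; i++ )); do
    # spdk_get_version is a cheap RPC; success means the target is listening.
    if scripts/rpc.py -s "$rpc_addr" spdk_get_version >/dev/null 2>&1; then
        echo "target is up after $i retries"
        break
    fi
    sleep 0.1
done
(( i < max_retries )) || { echo "target never came up" >&2; exit 1; }
```
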
00:05:29.478   13:31:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:29.478  INFO: shutting down applications...
00:05:29.478   13:31:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3123542 ]]
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3123542
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:29.478   13:31:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:30.047   13:31:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:30.047   13:31:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:30.047   13:31:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:30.047   13:31:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:30.678   13:31:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:30.678   13:31:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:30.678   13:31:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:30.678   13:31:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:30.938   13:31:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:30.938   13:31:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:30.938   13:31:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:30.938   13:31:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:31.505   13:31:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:31.505   13:31:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:31.506   13:31:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:31.506   13:31:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:32.073   13:31:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:32.073   13:31:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:32.073   13:31:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:32.073   13:31:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3123542
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:32.641   13:31:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:32.641  SPDK target shutdown done
00:05:32.641   13:31:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:32.641  Success
00:05:32.641  
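
The shutdown above is the graceful half of the harness: one SIGINT, then up to 30 `kill -0` probes at 0.5 s intervals (a ~15 s budget) before the loop breaks once the pid is gone. A condensed sketch of that stop-with-timeout pattern, with an illustrative SIGKILL escalation that the harness handles elsewhere:

```bash
# Hypothetical graceful shutdown in the spirit of json_config/common.sh above.
stop_app() {
    local pid=$1 tries=30
    kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone

    while (( tries-- > 0 )); do
        if ! kill -0 "$pid" 2>/dev/null; then      # kill -0: liveness probe only
            echo "SPDK target shutdown done"
            return 0
        fi
        sleep 0.5
    done
    echo "target did not exit in time, escalating" >&2
    kill -9 "$pid" 2>/dev/null
}
```
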
00:05:32.641  real	0m4.545s
00:05:32.641  user	0m3.644s
00:05:32.641  sys	0m0.780s
00:05:32.641   13:31:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.641   13:31:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:32.641  ************************************
00:05:32.641  END TEST json_config_extra_key
00:05:32.641  ************************************
00:05:32.641   13:31:32  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:32.641   13:31:32  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:32.641   13:31:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.641   13:31:32  -- common/autotest_common.sh@10 -- # set +x
00:05:32.641  ************************************
00:05:32.641  START TEST alias_rpc
00:05:32.641  ************************************
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:32.641  * Looking for test storage...
00:05:32.641  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:32.641     13:31:32 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:32.641     13:31:32 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:32.641     13:31:32 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:32.641    13:31:32 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:32.641  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.641  		--rc genhtml_branch_coverage=1
00:05:32.641  		--rc genhtml_function_coverage=1
00:05:32.641  		--rc genhtml_legend=1
00:05:32.641  		--rc geninfo_all_blocks=1
00:05:32.641  		--rc geninfo_unexecuted_blocks=1
00:05:32.641  		
00:05:32.641  		'
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:32.641  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.641  		--rc genhtml_branch_coverage=1
00:05:32.641  		--rc genhtml_function_coverage=1
00:05:32.641  		--rc genhtml_legend=1
00:05:32.641  		--rc geninfo_all_blocks=1
00:05:32.641  		--rc geninfo_unexecuted_blocks=1
00:05:32.641  		
00:05:32.641  		'
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:32.641  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.641  		--rc genhtml_branch_coverage=1
00:05:32.641  		--rc genhtml_function_coverage=1
00:05:32.641  		--rc genhtml_legend=1
00:05:32.641  		--rc geninfo_all_blocks=1
00:05:32.641  		--rc geninfo_unexecuted_blocks=1
00:05:32.641  		
00:05:32.641  		'
00:05:32.641    13:31:32 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:32.641  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.641  		--rc genhtml_branch_coverage=1
00:05:32.641  		--rc genhtml_function_coverage=1
00:05:32.641  		--rc genhtml_legend=1
00:05:32.641  		--rc geninfo_all_blocks=1
00:05:32.641  		--rc geninfo_unexecuted_blocks=1
00:05:32.641  		
00:05:32.641  		'
00:05:32.641   13:31:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:32.641   13:31:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3124413
00:05:32.641   13:31:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3124413
00:05:32.641   13:31:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3124413 ']'
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:32.641  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:32.641   13:31:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.900  [2024-12-14 13:31:32.441878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:32.900  [2024-12-14 13:31:32.441977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124413 ]
00:05:32.900  [2024-12-14 13:31:32.571142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.159  [2024-12-14 13:31:32.665815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.728   13:31:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:33.728   13:31:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:33.728   13:31:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:33.988   13:31:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3124413
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3124413 ']'
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3124413
00:05:33.988    13:31:33 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.988    13:31:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124413
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124413'
00:05:33.988  killing process with pid 3124413
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 3124413
00:05:33.988   13:31:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 3124413
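
killprocess above is careful in three ways: it probes the pid with `kill -0`, double-checks via `ps -o comm=` that the pid still belongs to the SPDK reactor (reactor_0) and was not recycled by something like sudo, and finally waits so the child is reaped. A hedged sketch of that shape; the sudo check mirrors the trace, the rest is illustrative:

```bash
# Hypothetical killprocess in the spirit of autotest_common.sh above.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0            # nothing to kill

    # Refuse to act if the pid was recycled by an unrelated process.
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && { echo "refusing to kill sudo" >&2; return 1; }

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap our own child
}
```
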
00:05:36.525  
00:05:36.525  real	0m3.726s
00:05:36.525  user	0m3.684s
00:05:36.525  sys	0m0.638s
00:05:36.525   13:31:35 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.525   13:31:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.525  ************************************
00:05:36.525  END TEST alias_rpc
00:05:36.525  ************************************
00:05:36.525   13:31:35  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:36.525   13:31:35  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:36.525   13:31:35  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.525   13:31:35  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.525   13:31:35  -- common/autotest_common.sh@10 -- # set +x
00:05:36.525  ************************************
00:05:36.525  START TEST spdkcli_tcp
00:05:36.525  ************************************
00:05:36.525   13:31:35 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:36.525  * Looking for test storage...
00:05:36.525  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:36.525     13:31:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:05:36.525     13:31:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.525     13:31:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.525    13:31:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:36.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.525  		--rc genhtml_branch_coverage=1
00:05:36.525  		--rc genhtml_function_coverage=1
00:05:36.525  		--rc genhtml_legend=1
00:05:36.525  		--rc geninfo_all_blocks=1
00:05:36.525  		--rc geninfo_unexecuted_blocks=1
00:05:36.525  		
00:05:36.525  		'
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:36.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.525  		--rc genhtml_branch_coverage=1
00:05:36.525  		--rc genhtml_function_coverage=1
00:05:36.525  		--rc genhtml_legend=1
00:05:36.525  		--rc geninfo_all_blocks=1
00:05:36.525  		--rc geninfo_unexecuted_blocks=1
00:05:36.525  		
00:05:36.525  		'
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:36.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.525  		--rc genhtml_branch_coverage=1
00:05:36.525  		--rc genhtml_function_coverage=1
00:05:36.525  		--rc genhtml_legend=1
00:05:36.525  		--rc geninfo_all_blocks=1
00:05:36.525  		--rc geninfo_unexecuted_blocks=1
00:05:36.525  		
00:05:36.525  		'
00:05:36.525    13:31:36 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:36.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.525  		--rc genhtml_branch_coverage=1
00:05:36.525  		--rc genhtml_function_coverage=1
00:05:36.525  		--rc genhtml_legend=1
00:05:36.525  		--rc geninfo_all_blocks=1
00:05:36.525  		--rc geninfo_unexecuted_blocks=1
00:05:36.525  		
00:05:36.525  		'
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh
00:05:36.525    13:31:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:36.525    13:31:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3125142
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3125142
00:05:36.525   13:31:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3125142 ']'
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:36.525  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:36.525   13:31:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:36.525  [2024-12-14 13:31:36.241057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:36.525  [2024-12-14 13:31:36.241175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125142 ]
00:05:36.785  [2024-12-14 13:31:36.370095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:36.785  [2024-12-14 13:31:36.467536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.785  [2024-12-14 13:31:36.467545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.723   13:31:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.723   13:31:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:37.723   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3125284
00:05:37.723   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:37.723   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:37.723  [
00:05:37.723    "bdev_malloc_delete",
00:05:37.723    "bdev_malloc_create",
00:05:37.723    "bdev_null_resize",
00:05:37.723    "bdev_null_delete",
00:05:37.723    "bdev_null_create",
00:05:37.723    "bdev_nvme_cuse_unregister",
00:05:37.723    "bdev_nvme_cuse_register",
00:05:37.723    "bdev_opal_new_user",
00:05:37.723    "bdev_opal_set_lock_state",
00:05:37.723    "bdev_opal_delete",
00:05:37.723    "bdev_opal_get_info",
00:05:37.723    "bdev_opal_create",
00:05:37.723    "bdev_nvme_opal_revert",
00:05:37.723    "bdev_nvme_opal_init",
00:05:37.723    "bdev_nvme_send_cmd",
00:05:37.723    "bdev_nvme_set_keys",
00:05:37.723    "bdev_nvme_get_path_iostat",
00:05:37.723    "bdev_nvme_get_mdns_discovery_info",
00:05:37.723    "bdev_nvme_stop_mdns_discovery",
00:05:37.723    "bdev_nvme_start_mdns_discovery",
00:05:37.723    "bdev_nvme_set_multipath_policy",
00:05:37.723    "bdev_nvme_set_preferred_path",
00:05:37.723    "bdev_nvme_get_io_paths",
00:05:37.723    "bdev_nvme_remove_error_injection",
00:05:37.723    "bdev_nvme_add_error_injection",
00:05:37.723    "bdev_nvme_get_discovery_info",
00:05:37.723    "bdev_nvme_stop_discovery",
00:05:37.723    "bdev_nvme_start_discovery",
00:05:37.723    "bdev_nvme_get_controller_health_info",
00:05:37.723    "bdev_nvme_disable_controller",
00:05:37.723    "bdev_nvme_enable_controller",
00:05:37.723    "bdev_nvme_reset_controller",
00:05:37.723    "bdev_nvme_get_transport_statistics",
00:05:37.723    "bdev_nvme_apply_firmware",
00:05:37.723    "bdev_nvme_detach_controller",
00:05:37.723    "bdev_nvme_get_controllers",
00:05:37.723    "bdev_nvme_attach_controller",
00:05:37.723    "bdev_nvme_set_hotplug",
00:05:37.723    "bdev_nvme_set_options",
00:05:37.723    "bdev_passthru_delete",
00:05:37.723    "bdev_passthru_create",
00:05:37.723    "bdev_lvol_set_parent_bdev",
00:05:37.723    "bdev_lvol_set_parent",
00:05:37.723    "bdev_lvol_check_shallow_copy",
00:05:37.723    "bdev_lvol_start_shallow_copy",
00:05:37.723    "bdev_lvol_grow_lvstore",
00:05:37.723    "bdev_lvol_get_lvols",
00:05:37.723    "bdev_lvol_get_lvstores",
00:05:37.724    "bdev_lvol_delete",
00:05:37.724    "bdev_lvol_set_read_only",
00:05:37.724    "bdev_lvol_resize",
00:05:37.724    "bdev_lvol_decouple_parent",
00:05:37.724    "bdev_lvol_inflate",
00:05:37.724    "bdev_lvol_rename",
00:05:37.724    "bdev_lvol_clone_bdev",
00:05:37.724    "bdev_lvol_clone",
00:05:37.724    "bdev_lvol_snapshot",
00:05:37.724    "bdev_lvol_create",
00:05:37.724    "bdev_lvol_delete_lvstore",
00:05:37.724    "bdev_lvol_rename_lvstore",
00:05:37.724    "bdev_lvol_create_lvstore",
00:05:37.724    "bdev_raid_set_options",
00:05:37.724    "bdev_raid_remove_base_bdev",
00:05:37.724    "bdev_raid_add_base_bdev",
00:05:37.724    "bdev_raid_delete",
00:05:37.724    "bdev_raid_create",
00:05:37.724    "bdev_raid_get_bdevs",
00:05:37.724    "bdev_error_inject_error",
00:05:37.724    "bdev_error_delete",
00:05:37.724    "bdev_error_create",
00:05:37.724    "bdev_split_delete",
00:05:37.724    "bdev_split_create",
00:05:37.724    "bdev_delay_delete",
00:05:37.724    "bdev_delay_create",
00:05:37.724    "bdev_delay_update_latency",
00:05:37.724    "bdev_zone_block_delete",
00:05:37.724    "bdev_zone_block_create",
00:05:37.724    "blobfs_create",
00:05:37.724    "blobfs_detect",
00:05:37.724    "blobfs_set_cache_size",
00:05:37.724    "bdev_aio_delete",
00:05:37.724    "bdev_aio_rescan",
00:05:37.724    "bdev_aio_create",
00:05:37.724    "bdev_ftl_set_property",
00:05:37.724    "bdev_ftl_get_properties",
00:05:37.724    "bdev_ftl_get_stats",
00:05:37.724    "bdev_ftl_unmap",
00:05:37.724    "bdev_ftl_unload",
00:05:37.724    "bdev_ftl_delete",
00:05:37.724    "bdev_ftl_load",
00:05:37.724    "bdev_ftl_create",
00:05:37.724    "bdev_virtio_attach_controller",
00:05:37.724    "bdev_virtio_scsi_get_devices",
00:05:37.724    "bdev_virtio_detach_controller",
00:05:37.724    "bdev_virtio_blk_set_hotplug",
00:05:37.724    "bdev_iscsi_delete",
00:05:37.724    "bdev_iscsi_create",
00:05:37.724    "bdev_iscsi_set_options",
00:05:37.724    "accel_error_inject_error",
00:05:37.724    "ioat_scan_accel_module",
00:05:37.724    "dsa_scan_accel_module",
00:05:37.724    "iaa_scan_accel_module",
00:05:37.724    "keyring_file_remove_key",
00:05:37.724    "keyring_file_add_key",
00:05:37.724    "keyring_linux_set_options",
00:05:37.724    "fsdev_aio_delete",
00:05:37.724    "fsdev_aio_create",
00:05:37.724    "iscsi_get_histogram",
00:05:37.724    "iscsi_enable_histogram",
00:05:37.724    "iscsi_set_options",
00:05:37.724    "iscsi_get_auth_groups",
00:05:37.724    "iscsi_auth_group_remove_secret",
00:05:37.724    "iscsi_auth_group_add_secret",
00:05:37.724    "iscsi_delete_auth_group",
00:05:37.724    "iscsi_create_auth_group",
00:05:37.724    "iscsi_set_discovery_auth",
00:05:37.724    "iscsi_get_options",
00:05:37.724    "iscsi_target_node_request_logout",
00:05:37.724    "iscsi_target_node_set_redirect",
00:05:37.724    "iscsi_target_node_set_auth",
00:05:37.724    "iscsi_target_node_add_lun",
00:05:37.724    "iscsi_get_stats",
00:05:37.724    "iscsi_get_connections",
00:05:37.724    "iscsi_portal_group_set_auth",
00:05:37.724    "iscsi_start_portal_group",
00:05:37.724    "iscsi_delete_portal_group",
00:05:37.724    "iscsi_create_portal_group",
00:05:37.724    "iscsi_get_portal_groups",
00:05:37.724    "iscsi_delete_target_node",
00:05:37.724    "iscsi_target_node_remove_pg_ig_maps",
00:05:37.724    "iscsi_target_node_add_pg_ig_maps",
00:05:37.724    "iscsi_create_target_node",
00:05:37.724    "iscsi_get_target_nodes",
00:05:37.724    "iscsi_delete_initiator_group",
00:05:37.724    "iscsi_initiator_group_remove_initiators",
00:05:37.724    "iscsi_initiator_group_add_initiators",
00:05:37.724    "iscsi_create_initiator_group",
00:05:37.724    "iscsi_get_initiator_groups",
00:05:37.724    "nvmf_set_crdt",
00:05:37.724    "nvmf_set_config",
00:05:37.724    "nvmf_set_max_subsystems",
00:05:37.724    "nvmf_stop_mdns_prr",
00:05:37.724    "nvmf_publish_mdns_prr",
00:05:37.724    "nvmf_subsystem_get_listeners",
00:05:37.724    "nvmf_subsystem_get_qpairs",
00:05:37.724    "nvmf_subsystem_get_controllers",
00:05:37.724    "nvmf_get_stats",
00:05:37.724    "nvmf_get_transports",
00:05:37.724    "nvmf_create_transport",
00:05:37.724    "nvmf_get_targets",
00:05:37.724    "nvmf_delete_target",
00:05:37.724    "nvmf_create_target",
00:05:37.724    "nvmf_subsystem_allow_any_host",
00:05:37.724    "nvmf_subsystem_set_keys",
00:05:37.724    "nvmf_subsystem_remove_host",
00:05:37.724    "nvmf_subsystem_add_host",
00:05:37.724    "nvmf_ns_remove_host",
00:05:37.724    "nvmf_ns_add_host",
00:05:37.724    "nvmf_subsystem_remove_ns",
00:05:37.724    "nvmf_subsystem_set_ns_ana_group",
00:05:37.724    "nvmf_subsystem_add_ns",
00:05:37.724    "nvmf_subsystem_listener_set_ana_state",
00:05:37.724    "nvmf_discovery_get_referrals",
00:05:37.724    "nvmf_discovery_remove_referral",
00:05:37.724    "nvmf_discovery_add_referral",
00:05:37.724    "nvmf_subsystem_remove_listener",
00:05:37.724    "nvmf_subsystem_add_listener",
00:05:37.724    "nvmf_delete_subsystem",
00:05:37.724    "nvmf_create_subsystem",
00:05:37.724    "nvmf_get_subsystems",
00:05:37.724    "env_dpdk_get_mem_stats",
00:05:37.724    "nbd_get_disks",
00:05:37.724    "nbd_stop_disk",
00:05:37.724    "nbd_start_disk",
00:05:37.724    "ublk_recover_disk",
00:05:37.724    "ublk_get_disks",
00:05:37.724    "ublk_stop_disk",
00:05:37.724    "ublk_start_disk",
00:05:37.724    "ublk_destroy_target",
00:05:37.724    "ublk_create_target",
00:05:37.724    "virtio_blk_create_transport",
00:05:37.724    "virtio_blk_get_transports",
00:05:37.724    "vhost_controller_set_coalescing",
00:05:37.724    "vhost_get_controllers",
00:05:37.724    "vhost_delete_controller",
00:05:37.724    "vhost_create_blk_controller",
00:05:37.724    "vhost_scsi_controller_remove_target",
00:05:37.724    "vhost_scsi_controller_add_target",
00:05:37.724    "vhost_start_scsi_controller",
00:05:37.724    "vhost_create_scsi_controller",
00:05:37.724    "thread_set_cpumask",
00:05:37.724    "scheduler_set_options",
00:05:37.724    "framework_get_governor",
00:05:37.724    "framework_get_scheduler",
00:05:37.724    "framework_set_scheduler",
00:05:37.724    "framework_get_reactors",
00:05:37.724    "thread_get_io_channels",
00:05:37.724    "thread_get_pollers",
00:05:37.724    "thread_get_stats",
00:05:37.724    "framework_monitor_context_switch",
00:05:37.724    "spdk_kill_instance",
00:05:37.724    "log_enable_timestamps",
00:05:37.724    "log_get_flags",
00:05:37.724    "log_clear_flag",
00:05:37.724    "log_set_flag",
00:05:37.724    "log_get_level",
00:05:37.724    "log_set_level",
00:05:37.724    "log_get_print_level",
00:05:37.724    "log_set_print_level",
00:05:37.724    "framework_enable_cpumask_locks",
00:05:37.724    "framework_disable_cpumask_locks",
00:05:37.724    "framework_wait_init",
00:05:37.724    "framework_start_init",
00:05:37.724    "scsi_get_devices",
00:05:37.724    "bdev_get_histogram",
00:05:37.724    "bdev_enable_histogram",
00:05:37.724    "bdev_set_qos_limit",
00:05:37.724    "bdev_set_qd_sampling_period",
00:05:37.724    "bdev_get_bdevs",
00:05:37.724    "bdev_reset_iostat",
00:05:37.724    "bdev_get_iostat",
00:05:37.724    "bdev_examine",
00:05:37.724    "bdev_wait_for_examine",
00:05:37.724    "bdev_set_options",
00:05:37.724    "accel_get_stats",
00:05:37.724    "accel_set_options",
00:05:37.724    "accel_set_driver",
00:05:37.724    "accel_crypto_key_destroy",
00:05:37.724    "accel_crypto_keys_get",
00:05:37.724    "accel_crypto_key_create",
00:05:37.724    "accel_assign_opc",
00:05:37.724    "accel_get_module_info",
00:05:37.724    "accel_get_opc_assignments",
00:05:37.724    "vmd_rescan",
00:05:37.724    "vmd_remove_device",
00:05:37.724    "vmd_enable",
00:05:37.724    "sock_get_default_impl",
00:05:37.724    "sock_set_default_impl",
00:05:37.724    "sock_impl_set_options",
00:05:37.724    "sock_impl_get_options",
00:05:37.724    "iobuf_get_stats",
00:05:37.724    "iobuf_set_options",
00:05:37.724    "keyring_get_keys",
00:05:37.724    "framework_get_pci_devices",
00:05:37.724    "framework_get_config",
00:05:37.724    "framework_get_subsystems",
00:05:37.724    "fsdev_set_opts",
00:05:37.724    "fsdev_get_opts",
00:05:37.724    "trace_get_info",
00:05:37.724    "trace_get_tpoint_group_mask",
00:05:37.724    "trace_disable_tpoint_group",
00:05:37.724    "trace_enable_tpoint_group",
00:05:37.724    "trace_clear_tpoint_mask",
00:05:37.724    "trace_set_tpoint_mask",
00:05:37.724    "notify_get_notifications",
00:05:37.724    "notify_get_types",
00:05:37.724    "spdk_get_version",
00:05:37.724    "rpc_get_methods"
00:05:37.724  ]
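
Everything in the method list above was fetched over TCP: the setup bridges port 9998 to the target's UNIX-domain socket with socat, and rpc.py talks to 127.0.0.1:9998 with a 2 s timeout and 100 connect retries. A hedged sketch of the same round trip; `fork,reuseaddr` is added here so the bridge survives repeated connections, whereas the harness's socat invocation above is single-shot:

```bash
# Hypothetical reconstruction of the TCP<->UNIX bridge exercised above.
socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Any JSON-RPC client can now reach the target over TCP:
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods | head

kill "$socat_pid"
```
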
00:05:37.724   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:37.724   13:31:37 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:37.724   13:31:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:37.984   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:37.984   13:31:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3125142
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3125142 ']'
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3125142
00:05:37.984    13:31:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.984    13:31:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3125142
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3125142'
00:05:37.984  killing process with pid 3125142
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3125142
00:05:37.984   13:31:37 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3125142
00:05:40.520  
00:05:40.520  real	0m3.824s
00:05:40.520  user	0m6.897s
00:05:40.520  sys	0m0.666s
00:05:40.520   13:31:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.520   13:31:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:40.520  ************************************
00:05:40.520  END TEST spdkcli_tcp
00:05:40.520  ************************************
00:05:40.520   13:31:39  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:40.520   13:31:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.520   13:31:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.520   13:31:39  -- common/autotest_common.sh@10 -- # set +x
00:05:40.520  ************************************
00:05:40.520  START TEST dpdk_mem_utility
00:05:40.520  ************************************
00:05:40.520   13:31:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:40.520  * Looking for test storage...
00:05:40.520  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility
00:05:40.520    13:31:39 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:40.520     13:31:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:05:40.520     13:31:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:40.520     13:31:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:40.520    13:31:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:40.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.520  		--rc genhtml_branch_coverage=1
00:05:40.520  		--rc genhtml_function_coverage=1
00:05:40.520  		--rc genhtml_legend=1
00:05:40.520  		--rc geninfo_all_blocks=1
00:05:40.520  		--rc geninfo_unexecuted_blocks=1
00:05:40.520  		
00:05:40.520  		'
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:40.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.520  		--rc genhtml_branch_coverage=1
00:05:40.520  		--rc genhtml_function_coverage=1
00:05:40.520  		--rc genhtml_legend=1
00:05:40.520  		--rc geninfo_all_blocks=1
00:05:40.520  		--rc geninfo_unexecuted_blocks=1
00:05:40.520  		
00:05:40.520  		'
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:40.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.520  		--rc genhtml_branch_coverage=1
00:05:40.520  		--rc genhtml_function_coverage=1
00:05:40.520  		--rc genhtml_legend=1
00:05:40.520  		--rc geninfo_all_blocks=1
00:05:40.520  		--rc geninfo_unexecuted_blocks=1
00:05:40.520  		
00:05:40.520  		'
00:05:40.520    13:31:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:40.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.520  		--rc genhtml_branch_coverage=1
00:05:40.520  		--rc genhtml_function_coverage=1
00:05:40.520  		--rc genhtml_legend=1
00:05:40.520  		--rc geninfo_all_blocks=1
00:05:40.520  		--rc geninfo_unexecuted_blocks=1
00:05:40.520  		
00:05:40.520  		'
00:05:40.520   13:31:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:40.520   13:31:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3125884
00:05:40.520   13:31:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3125884
00:05:40.520   13:31:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3125884 ']'
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.520   13:31:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:40.520  [2024-12-14 13:31:40.136230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:40.521  [2024-12-14 13:31:40.136322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125884 ]
00:05:40.779  [2024-12-14 13:31:40.270416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.779  [2024-12-14 13:31:40.372149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.717   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.717   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:41.717   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:41.717   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:41.717   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.717   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:41.717  {
00:05:41.717  "filename": "/tmp/spdk_mem_dump.txt"
00:05:41.717  }
00:05:41.717   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.717   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:41.717  DPDK memory size 824.000000 MiB in 1 heap(s)
00:05:41.717  1 heaps totaling size 824.000000 MiB
00:05:41.717    size:  824.000000 MiB heap id: 0
00:05:41.717  end heaps----------
00:05:41.717  9 mempools totaling size 603.782043 MiB
00:05:41.717    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:05:41.717    size:  158.602051 MiB name: PDU_data_out_Pool
00:05:41.717    size:  100.555481 MiB name: bdev_io_3125884
00:05:41.717    size:   50.003479 MiB name: msgpool_3125884
00:05:41.717    size:   36.509338 MiB name: fsdev_io_3125884
00:05:41.717    size:   21.763794 MiB name: PDU_Pool
00:05:41.717    size:   19.513306 MiB name: SCSI_TASK_Pool
00:05:41.717    size:    4.133484 MiB name: evtpool_3125884
00:05:41.717    size:    0.026123 MiB name: Session_Pool
00:05:41.717  end mempools-------
00:05:41.717  6 memzones totaling size 4.142822 MiB
00:05:41.717    size:    1.000366 MiB name: RG_ring_0_3125884
00:05:41.717    size:    1.000366 MiB name: RG_ring_1_3125884
00:05:41.717    size:    1.000366 MiB name: RG_ring_4_3125884
00:05:41.717    size:    1.000366 MiB name: RG_ring_5_3125884
00:05:41.717    size:    0.125366 MiB name: RG_ring_2_3125884
00:05:41.717    size:    0.015991 MiB name: RG_ring_3_3125884
00:05:41.717  end memzones-------
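
The summary above is produced in two hops: `env_dpdk_get_mem_stats` makes the running target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py condenses that dump into heap, mempool, and memzone totals; with `-m 0` (next) it expands heap 0 into its free and busy element lists. A short sketch of the round trip, with paths as in this run:

```bash
# Hypothetical reconstruction of the memory-stats round trip above.
scripts/rpc.py env_dpdk_get_mem_stats     # -> {"filename": "/tmp/spdk_mem_dump.txt"}

scripts/dpdk_mem_info.py                  # summarize heaps/mempools/memzones
scripts/dpdk_mem_info.py -m 0             # expand heap 0's element lists
```
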
00:05:41.717   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:41.717  heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19
00:05:41.717    list of free elements. size: 16.847595 MiB
00:05:41.717      element at address: 0x200006400000 with size:    1.995972 MiB
00:05:41.717      element at address: 0x20000a600000 with size:    1.995972 MiB
00:05:41.717      element at address: 0x200003e00000 with size:    1.991028 MiB
00:05:41.717      element at address: 0x200019500040 with size:    0.999939 MiB
00:05:41.717      element at address: 0x200019900040 with size:    0.999939 MiB
00:05:41.717      element at address: 0x200019a00000 with size:    0.999329 MiB
00:05:41.717      element at address: 0x200000400000 with size:    0.998108 MiB
00:05:41.717      element at address: 0x200032600000 with size:    0.994324 MiB
00:05:41.717      element at address: 0x200019200000 with size:    0.959900 MiB
00:05:41.717      element at address: 0x200019d00040 with size:    0.937256 MiB
00:05:41.717      element at address: 0x200000200000 with size:    0.716980 MiB
00:05:41.717      element at address: 0x20001b400000 with size:    0.583191 MiB
00:05:41.717      element at address: 0x200000c00000 with size:    0.495300 MiB
00:05:41.717      element at address: 0x200019600000 with size:    0.491150 MiB
00:05:41.717      element at address: 0x200019e00000 with size:    0.485657 MiB
00:05:41.717      element at address: 0x200012c00000 with size:    0.436157 MiB
00:05:41.717      element at address: 0x200028800000 with size:    0.411072 MiB
00:05:41.717      element at address: 0x200000800000 with size:    0.355286 MiB
00:05:41.717      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:05:41.717    list of standard malloc elements. size: 199.221497 MiB
00:05:41.717      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:05:41.717      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:05:41.717      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:05:41.717      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:05:41.717      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:05:41.717      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:05:41.717      element at address: 0x200019deff40 with size:    0.062683 MiB
00:05:41.717      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:05:41.717      element at address: 0x200012bff040 with size:    0.000427 MiB
00:05:41.717      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:05:41.717      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:05:41.717      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:05:41.717      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:05:41.717      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:05:41.718      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:05:41.718      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200000cff000 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:05:41.718      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff200 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff300 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff400 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff500 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff600 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff700 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff800 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bff900 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:05:41.718      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:05:41.718    list of memzone associated elements. size: 607.930908 MiB
00:05:41.718      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:05:41.718        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:41.718      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:05:41.718        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:41.718      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:05:41.718        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_3125884_0
00:05:41.718      element at address: 0x200000dff340 with size:   48.003113 MiB
00:05:41.718        associated memzone info: size:   48.002930 MiB name: MP_msgpool_3125884_0
00:05:41.718      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:05:41.718        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_3125884_0
00:05:41.718      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:05:41.718        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:05:41.718      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:05:41.718        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:41.718      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:05:41.718        associated memzone info: size:    3.000122 MiB name: MP_evtpool_3125884_0
00:05:41.718      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:05:41.718        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_3125884
00:05:41.718      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:05:41.718        associated memzone info: size:    1.007996 MiB name: MP_evtpool_3125884
00:05:41.718      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:05:41.718        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:05:41.718      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:05:41.718        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:41.718      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:05:41.718        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:05:41.718      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:05:41.718        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:41.718      element at address: 0x200000cff100 with size:    1.000549 MiB
00:05:41.718        associated memzone info: size:    1.000366 MiB name: RG_ring_0_3125884
00:05:41.718      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:05:41.718        associated memzone info: size:    1.000366 MiB name: RG_ring_1_3125884
00:05:41.718      element at address: 0x200019affd40 with size:    1.000549 MiB
00:05:41.718        associated memzone info: size:    1.000366 MiB name: RG_ring_4_3125884
00:05:41.718      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:05:41.718        associated memzone info: size:    1.000366 MiB name: RG_ring_5_3125884
00:05:41.718      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:05:41.718        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_3125884
00:05:41.718      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:05:41.718        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_3125884
00:05:41.718      element at address: 0x20001967dbc0 with size:    0.500549 MiB
00:05:41.718        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:05:41.718      element at address: 0x200012c6fa80 with size:    0.500549 MiB
00:05:41.718        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:41.718      element at address: 0x200019e7c540 with size:    0.250549 MiB
00:05:41.718        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:41.718      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:05:41.718        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_3125884
00:05:41.718      element at address: 0x20000085f180 with size:    0.125549 MiB
00:05:41.718        associated memzone info: size:    0.125366 MiB name: RG_ring_2_3125884
00:05:41.718      element at address: 0x2000192f5bc0 with size:    0.031799 MiB
00:05:41.718        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:41.718      element at address: 0x2000288693c0 with size:    0.023804 MiB
00:05:41.718        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:05:41.718      element at address: 0x20000085af40 with size:    0.016174 MiB
00:05:41.718        associated memzone info: size:    0.015991 MiB name: RG_ring_3_3125884
00:05:41.718      element at address: 0x20002886f540 with size:    0.002502 MiB
00:05:41.718        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:05:41.718      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:05:41.718        associated memzone info: size:    0.000183 MiB name: MP_msgpool_3125884
00:05:41.718      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:05:41.718        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_3125884
00:05:41.718      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:05:41.718        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_3125884
00:05:41.718      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:05:41.718        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
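The detailed view above is the same helper with a heap selected; assuming the dump file from the previous step is still in place:

    ./scripts/dpdk_mem_info.py -m 0   # per-element view of heap 0: free list, malloc elements, memzone associations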
00:05:41.718   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:41.718   13:31:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3125884
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3125884 ']'
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3125884
00:05:41.718    13:31:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:41.718    13:31:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3125884
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3125884'
00:05:41.718  killing process with pid 3125884
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3125884
00:05:41.718   13:31:41 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3125884
00:05:44.256  
00:05:44.256  real	0m3.627s
00:05:44.256  user	0m3.530s
00:05:44.256  sys	0m0.632s
00:05:44.256   13:31:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:44.256   13:31:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:44.256  ************************************
00:05:44.256  END TEST dpdk_mem_utility
00:05:44.256  ************************************
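The teardown traced at @954-@978 is the killprocess helper from common/autotest_common.sh; a hedged reconstruction from the xtrace (guard order follows the trace; the sudo branch is not exercised in this run):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if it's already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 above
            if [ "$name" = sudo ]; then
                :                                      # sudo wrapper branch; not hit here
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it before the next test starts
    }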
00:05:44.256   13:31:43  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh
00:05:44.256   13:31:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:44.256   13:31:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.256   13:31:43  -- common/autotest_common.sh@10 -- # set +x
00:05:44.256  ************************************
00:05:44.256  START TEST event
00:05:44.256  ************************************
00:05:44.256   13:31:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh
00:05:44.256  * Looking for test storage...
00:05:44.256  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:44.256     13:31:43 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:44.256     13:31:43 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:44.256    13:31:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.256    13:31:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.256    13:31:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.256    13:31:43 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.256    13:31:43 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.256    13:31:43 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.256    13:31:43 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.256    13:31:43 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.256    13:31:43 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.256    13:31:43 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.256    13:31:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.256    13:31:43 event -- scripts/common.sh@344 -- # case "$op" in
00:05:44.256    13:31:43 event -- scripts/common.sh@345 -- # : 1
00:05:44.256    13:31:43 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.256    13:31:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.256     13:31:43 event -- scripts/common.sh@365 -- # decimal 1
00:05:44.256     13:31:43 event -- scripts/common.sh@353 -- # local d=1
00:05:44.256     13:31:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.256     13:31:43 event -- scripts/common.sh@355 -- # echo 1
00:05:44.256    13:31:43 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.256     13:31:43 event -- scripts/common.sh@366 -- # decimal 2
00:05:44.256     13:31:43 event -- scripts/common.sh@353 -- # local d=2
00:05:44.256     13:31:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.256     13:31:43 event -- scripts/common.sh@355 -- # echo 2
00:05:44.256    13:31:43 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.256    13:31:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.256    13:31:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.256    13:31:43 event -- scripts/common.sh@368 -- # return 0
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:44.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.256  		--rc genhtml_branch_coverage=1
00:05:44.256  		--rc genhtml_function_coverage=1
00:05:44.256  		--rc genhtml_legend=1
00:05:44.256  		--rc geninfo_all_blocks=1
00:05:44.256  		--rc geninfo_unexecuted_blocks=1
00:05:44.256  		
00:05:44.256  		'
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:44.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.256  		--rc genhtml_branch_coverage=1
00:05:44.256  		--rc genhtml_function_coverage=1
00:05:44.256  		--rc genhtml_legend=1
00:05:44.256  		--rc geninfo_all_blocks=1
00:05:44.256  		--rc geninfo_unexecuted_blocks=1
00:05:44.256  		
00:05:44.256  		'
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:44.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.256  		--rc genhtml_branch_coverage=1
00:05:44.256  		--rc genhtml_function_coverage=1
00:05:44.256  		--rc genhtml_legend=1
00:05:44.256  		--rc geninfo_all_blocks=1
00:05:44.256  		--rc geninfo_unexecuted_blocks=1
00:05:44.256  		
00:05:44.256  		'
00:05:44.256    13:31:43 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:44.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.256  		--rc genhtml_branch_coverage=1
00:05:44.256  		--rc genhtml_function_coverage=1
00:05:44.256  		--rc genhtml_legend=1
00:05:44.256  		--rc geninfo_all_blocks=1
00:05:44.256  		--rc geninfo_unexecuted_blocks=1
00:05:44.256  		
00:05:44.256  		'
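The cmp_versions trace above is how the harness decides whether lcov is older than 2.x before picking the --rc coverage options; a hedged sketch of that comparison (the real code lives in scripts/common.sh; the function body here is reconstructed from the xtrace):

    lt() {   # "is $1 strictly older than $2?"
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"   # "2"    -> (2)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "legacy lcov: enable branch/function coverage via --rc"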
00:05:44.256   13:31:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:44.256    13:31:43 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:44.256   13:31:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:44.256   13:31:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:44.257   13:31:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.257   13:31:43 event -- common/autotest_common.sh@10 -- # set +x
00:05:44.257  ************************************
00:05:44.257  START TEST event_perf
00:05:44.257  ************************************
00:05:44.257   13:31:43 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:44.257  Running I/O for 1 seconds...[2024-12-14 13:31:43.785765] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:44.257  [2024-12-14 13:31:43.785857] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126568 ]
00:05:44.257  [2024-12-14 13:31:43.914888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:44.516  [2024-12-14 13:31:44.016900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.516  [2024-12-14 13:31:44.016977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:44.516  [2024-12-14 13:31:44.017009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.516  [2024-12-14 13:31:44.017021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:45.895  Running I/O for 1 seconds...
00:05:45.895  lcore  0:   205780
00:05:45.895  lcore  1:   205777
00:05:45.895  lcore  2:   205777
00:05:45.895  lcore  3:   205779
00:05:45.895  done.
00:05:45.895  
00:05:45.895  real	0m1.488s
00:05:45.895  user	0m4.327s
00:05:45.895  sys	0m0.157s
00:05:45.895   13:31:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.895   13:31:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:45.895  ************************************
00:05:45.895  END TEST event_perf
00:05:45.895  ************************************
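The per-lcore counts above come from running the perf binary directly; reproducing the run (mask 0xF = lcores 0-3, -t 1 = one second, per the "Running I/O for 1 seconds" banner):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1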
00:05:45.895   13:31:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:45.895   13:31:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:45.895   13:31:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.895   13:31:45 event -- common/autotest_common.sh@10 -- # set +x
00:05:45.895  ************************************
00:05:45.895  START TEST event_reactor
00:05:45.895  ************************************
00:05:45.895   13:31:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:45.895  [2024-12-14 13:31:45.352749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:45.895  [2024-12-14 13:31:45.352827] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126870 ]
00:05:45.895  [2024-12-14 13:31:45.479740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.895  [2024-12-14 13:31:45.579618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.273  test_start
00:05:47.273  oneshot
00:05:47.273  tick 100
00:05:47.273  tick 100
00:05:47.273  tick 250
00:05:47.273  tick 100
00:05:47.273  tick 100
00:05:47.273  tick 100
00:05:47.273  tick 250
00:05:47.273  tick 500
00:05:47.273  tick 100
00:05:47.273  tick 100
00:05:47.273  tick 250
00:05:47.274  tick 100
00:05:47.274  tick 100
00:05:47.274  test_end
00:05:47.274  
00:05:47.274  real	0m1.476s
00:05:47.274  user	0m1.340s
00:05:47.274  sys	0m0.129s
00:05:47.274   13:31:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.274   13:31:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:47.274  ************************************
00:05:47.274  END TEST event_reactor
00:05:47.274  ************************************
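Same pattern for the reactor test: a single core (mask 0x1) runs one reactor for a second, and the oneshot/tick lines are the events the test registered firing in order. To replay:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1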
00:05:47.274   13:31:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:47.274   13:31:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:47.274   13:31:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:47.274   13:31:46 event -- common/autotest_common.sh@10 -- # set +x
00:05:47.274  ************************************
00:05:47.274  START TEST event_reactor_perf
00:05:47.274  ************************************
00:05:47.274   13:31:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:47.274  [2024-12-14 13:31:46.887107] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:47.274  [2024-12-14 13:31:46.887184] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127155 ]
00:05:47.533  [2024-12-14 13:31:47.014670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.533  [2024-12-14 13:31:47.110224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.912  test_start
00:05:48.912  test_end
00:05:48.912  Performance:   407363 events per second
00:05:48.912  
00:05:48.912  real	0m1.468s
00:05:48.912  user	0m1.312s
00:05:48.912  sys	0m0.150s
00:05:48.912   13:31:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.912   13:31:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:48.912  ************************************
00:05:48.912  END TEST event_reactor_perf
00:05:48.912  ************************************
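The reactor_perf figure (about 407k events per second here) is single-core event throughput; to replay:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1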
00:05:48.912    13:31:48 event -- event/event.sh@49 -- # uname -s
00:05:48.912   13:31:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:48.912   13:31:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:48.912   13:31:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.912   13:31:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.912   13:31:48 event -- common/autotest_common.sh@10 -- # set +x
00:05:48.912  ************************************
00:05:48.912  START TEST event_scheduler
00:05:48.912  ************************************
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:48.912  * Looking for test storage...
00:05:48.912  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:48.912     13:31:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:48.912     13:31:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:48.912     13:31:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:48.912    13:31:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:48.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.912  		--rc genhtml_branch_coverage=1
00:05:48.912  		--rc genhtml_function_coverage=1
00:05:48.912  		--rc genhtml_legend=1
00:05:48.912  		--rc geninfo_all_blocks=1
00:05:48.912  		--rc geninfo_unexecuted_blocks=1
00:05:48.912  		
00:05:48.912  		'
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:48.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.912  		--rc genhtml_branch_coverage=1
00:05:48.912  		--rc genhtml_function_coverage=1
00:05:48.912  		--rc genhtml_legend=1
00:05:48.912  		--rc geninfo_all_blocks=1
00:05:48.912  		--rc geninfo_unexecuted_blocks=1
00:05:48.912  		
00:05:48.912  		'
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:48.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.912  		--rc genhtml_branch_coverage=1
00:05:48.912  		--rc genhtml_function_coverage=1
00:05:48.912  		--rc genhtml_legend=1
00:05:48.912  		--rc geninfo_all_blocks=1
00:05:48.912  		--rc geninfo_unexecuted_blocks=1
00:05:48.912  		
00:05:48.912  		'
00:05:48.912    13:31:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:48.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.912  		--rc genhtml_branch_coverage=1
00:05:48.912  		--rc genhtml_function_coverage=1
00:05:48.912  		--rc genhtml_legend=1
00:05:48.912  		--rc geninfo_all_blocks=1
00:05:48.912  		--rc geninfo_unexecuted_blocks=1
00:05:48.912  		
00:05:48.912  		'
00:05:48.912   13:31:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:48.912   13:31:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3127569
00:05:48.912   13:31:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:48.912   13:31:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:48.912   13:31:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3127569
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3127569 ']'
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.912  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.912   13:31:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:49.172  [2024-12-14 13:31:48.671769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:49.172  [2024-12-14 13:31:48.671872] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127569 ]
00:05:49.172  [2024-12-14 13:31:48.799672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:49.172  [2024-12-14 13:31:48.901653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.172  [2024-12-14 13:31:48.901723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:49.172  [2024-12-14 13:31:48.901776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:49.172  [2024-12-14 13:31:48.901787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:49.738   13:31:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:49.738   13:31:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:49.739   13:31:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:49.739   13:31:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.739   13:31:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:49.997  [2024-12-14 13:31:49.480274] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:49.997  [2024-12-14 13:31:49.480303] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:49.997  [2024-12-14 13:31:49.480326] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:49.997  [2024-12-14 13:31:49.480339] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:49.997  [2024-12-14 13:31:49.480355] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:49.997   13:31:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.997   13:31:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:49.997   13:31:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.997   13:31:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:50.256  [2024-12-14 13:31:49.757850] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:50.256   13:31:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.256   13:31:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:50.256   13:31:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:50.256   13:31:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:50.256   13:31:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:50.256  ************************************
00:05:50.256  START TEST scheduler_create_thread
00:05:50.256  ************************************
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.256  2
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.256  3
00:05:50.256   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  4
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  5
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  6
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  7
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  8
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  9
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257  10
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:50.257   13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.257    13:31:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.196    13:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:51.196   13:31:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:51.196   13:31:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:51.196   13:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:51.196   13:31:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:52.574   13:31:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:52.574  
00:05:52.574  real	0m2.142s
00:05:52.574  user	0m0.012s
00:05:52.574  sys	0m0.005s
00:05:52.574   13:31:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:52.574   13:31:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:52.574  ************************************
00:05:52.574  END TEST scheduler_create_thread
00:05:52.574  ************************************
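The thread setup traced above is driven entirely over RPC (scheduler.sh@29 aliases rpc to rpc_cmd, which wraps scripts/rpc.py). A sketch of the same calls issued by hand, with masks, activity percentages, and thread ids taken from the trace:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread on the same core
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # returned thread_id 11 above
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise it to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # returned thread_id 12 above
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # and remove it again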
00:05:52.574   13:31:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:52.574   13:31:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3127569
00:05:52.574   13:31:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3127569 ']'
00:05:52.574   13:31:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3127569
00:05:52.574    13:31:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:52.574   13:31:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:52.574    13:31:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3127569
00:05:52.574   13:31:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:52.574   13:31:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:52.574   13:31:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3127569'
00:05:52.574  killing process with pid 3127569
00:05:52.574   13:31:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3127569
00:05:52.574   13:31:52 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3127569
00:05:52.833  [2024-12-14 13:31:52.416541] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:53.771  
00:05:53.771  real	0m5.109s
00:05:53.771  user	0m8.674s
00:05:53.771  sys	0m0.582s
00:05:53.771   13:31:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:53.771   13:31:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:53.771  ************************************
00:05:53.771  END TEST event_scheduler
00:05:53.771  ************************************
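One ordering detail worth noting from this run: the scheduler app is started with --wait-for-rpc, and the scheduler is selected before the framework is initialized (scheduler.sh@39-40 above). The same two RPCs, issued by hand:

    ./scripts/rpc.py framework_set_scheduler dynamic   # select the scheduler while the app is paused
    ./scripts/rpc.py framework_start_init              # then bring the framework up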
00:05:54.030   13:31:53 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:54.030   13:31:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:54.030   13:31:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:54.030   13:31:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:54.030   13:31:53 event -- common/autotest_common.sh@10 -- # set +x
00:05:54.030  ************************************
00:05:54.030  START TEST app_repeat
00:05:54.030  ************************************
00:05:54.030   13:31:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3128505
00:05:54.030   13:31:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:54.031   13:31:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3128505'
00:05:54.031  Process app_repeat pid: 3128505
00:05:54.031   13:31:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:54.031   13:31:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:54.031  spdk_app_start Round 0
00:05:54.031   13:31:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3128505 /var/tmp/spdk-nbd.sock
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3128505 ']'
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:54.031  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:54.031   13:31:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:54.031   13:31:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:54.031  [2024-12-14 13:31:53.630439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:54.031  [2024-12-14 13:31:53.630529] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128505 ]
00:05:54.031  [2024-12-14 13:31:53.761096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:54.290  [2024-12-14 13:31:53.860730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.290  [2024-12-14 13:31:53.860741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
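The repeat test binary is launched with the nbd RPC socket, a two-core mask, and 4-second rounds (event.sh@18 above); by hand:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4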
00:05:54.859   13:31:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:54.859   13:31:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:54.859   13:31:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:55.118  Malloc0
00:05:55.118   13:31:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:55.377  Malloc1
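Both backing bdevs are 64 MiB malloc disks with a 4 KiB block size, created over the app's socket (event.sh@27-28 above):

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1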
00:05:55.377   13:31:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:55.377   13:31:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:55.636  /dev/nbd0
00:05:55.636    13:31:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:55.636   13:31:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:55.636  1+0 records in
00:05:55.636  1+0 records out
00:05:55.636  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226543 s, 18.1 MB/s
00:05:55.636    13:31:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:55.636   13:31:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:55.636   13:31:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:55.636   13:31:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:55.636   13:31:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:55.894  /dev/nbd1
00:05:55.894    13:31:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:55.894   13:31:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:55.894  1+0 records in
00:05:55.894  1+0 records out
00:05:55.894  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024338 s, 16.8 MB/s
00:05:55.894    13:31:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:55.894   13:31:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:55.894   13:31:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:55.894   13:31:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
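The waitfornbd helper traced for both devices polls /proc/partitions and then proves a direct read works; a hedged reconstruction from the xtrace (the real helper is in common/autotest_common.sh; the sleep between retries is an assumption, not visible in the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do                       # wait for the kernel to register the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                           # assumed backoff
        done
        for (( i = 1; i <= 20; i++ )); do                       # an O_DIRECT read of one block must succeed
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }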
00:05:55.894    13:31:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:55.894    13:31:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:55.894     13:31:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:55.894    13:31:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:55.894    {
00:05:55.894      "nbd_device": "/dev/nbd0",
00:05:55.894      "bdev_name": "Malloc0"
00:05:55.894    },
00:05:55.894    {
00:05:55.894      "nbd_device": "/dev/nbd1",
00:05:55.894      "bdev_name": "Malloc1"
00:05:55.894    }
00:05:55.894  ]'
00:05:55.894     13:31:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:55.894    {
00:05:55.894      "nbd_device": "/dev/nbd0",
00:05:55.894      "bdev_name": "Malloc0"
00:05:55.894    },
00:05:55.894    {
00:05:55.894      "nbd_device": "/dev/nbd1",
00:05:55.894      "bdev_name": "Malloc1"
00:05:55.894    }
00:05:55.894  ]'
00:05:55.894     13:31:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:56.153    13:31:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:56.153  /dev/nbd1'
00:05:56.153     13:31:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:56.153  /dev/nbd1'
00:05:56.153     13:31:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:56.153    13:31:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:56.153    13:31:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:56.153  256+0 records in
00:05:56.153  256+0 records out
00:05:56.153  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010495 s, 99.9 MB/s
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:56.153  256+0 records in
00:05:56.153  256+0 records out
00:05:56.153  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143593 s, 73.0 MB/s
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:56.153  256+0 records in
00:05:56.153  256+0 records out
00:05:56.153  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171597 s, 61.1 MB/s
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
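
Everything from nbd_common.sh@70 through @85 above is nbd_dd_data_verify, the two halves of the data-integrity check: write mode fills a 1 MiB scratch file from /dev/urandom and dd's it onto every NBD device with oflag=direct (bypassing the page cache so the I/O really reaches the SPDK bdev), and verify mode cmp's the first 1 MiB of each device back against the same file before deleting it. A sketch, assuming $rootdir points at the SPDK tree:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=$rootdir/test/event/nbdrandtest
        if [ "$operation" = write ]; then
            # 256 x 4 KiB blocks = 1 MiB of random reference data
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # cmp exits nonzero on the first differing byte, failing the test
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }
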
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:56.153   13:31:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:56.413    13:31:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:56.413   13:31:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:56.672   13:31:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
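
After each nbd_stop_disk RPC, waitfornbd_exit (nbd_common.sh@35-45) polls /proc/partitions up to 20 times until the kernel has actually torn the device down; in this run the name is already gone on the first probe, hence the immediate break. Roughly:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # poll interval assumed; the successful path above never sleeps
            else
                break        # device entry gone, teardown complete
            fi
        done
        return 0
    }
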
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:56.672     13:31:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:56.672    13:31:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:56.931   13:31:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:56.931   13:31:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:56.931   13:31:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:56.931   13:31:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:57.190   13:31:56 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:58.570  [2024-12-14 13:31:57.916284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:58.570  [2024-12-14 13:31:58.012815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:58.570  [2024-12-14 13:31:58.012816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.570  [2024-12-14 13:31:58.183731] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:58.570  [2024-12-14 13:31:58.183786] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:00.037   13:31:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:00.037   13:31:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:00.037  spdk_app_start Round 1
00:06:00.037   13:31:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3128505 /var/tmp/spdk-nbd.sock
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3128505 ']'
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:00.037  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.037   13:31:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:00.296   13:31:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.296   13:31:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
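
waitforlisten (autotest_common.sh@835-868) blocks until the freshly restarted app is back up and answering on the UNIX-domain RPC socket. The readiness probe itself isn't visible in this trace, so the sketch below assumes it retries an innocuous RPC such as rpc_get_methods:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid"                                    # give up if the target died
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                      # exact readiness probe is an assumption
            fi
            sleep 0.1
        done
        return 1
    }
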
00:06:00.296   13:31:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:00.556  Malloc0
00:06:00.556   13:32:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:00.815  Malloc1
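
Each round recreates its bdevs from scratch: bdev_malloc_create 64 4096 asks the target for a 64 MiB RAM-backed bdev with a 4096-byte block size, and the RPC prints the generated bdev name, which is why bare Malloc0/Malloc1 lines appear in the log:

    # Two 64 MiB, 4 KiB-block malloc bdevs on the NBD app's RPC socket
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # prints: Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # prints: Malloc1
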
00:06:00.815   13:32:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:00.815   13:32:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:01.075  /dev/nbd0
00:06:01.075    13:32:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:01.075   13:32:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:01.075  1+0 records in
00:06:01.075  1+0 records out
00:06:01.075  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222492 s, 18.4 MB/s
00:06:01.075    13:32:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.075   13:32:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:01.075   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:01.075   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:01.075   13:32:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:01.334  /dev/nbd1
00:06:01.334    13:32:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:01.334   13:32:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:01.334  1+0 records in
00:06:01.334  1+0 records out
00:06:01.334  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250824 s, 16.3 MB/s
00:06:01.334    13:32:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.334   13:32:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:01.334   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:01.334   13:32:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
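
The nbd_start_disks loop above pairs each bdev with a /dev/nbdX node and then hands off to waitfornbd (autotest_common.sh@872-893), which both waits for the node to appear in /proc/partitions and proves it services I/O by reading one 4 KiB block with direct I/O into a scratch file and checking the copy is non-empty. An approximate reconstruction, under the same $rootdir assumption as above:

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait (up to 20 probes) for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed interval; not visible in the trace
        done
        # Phase 2: read one direct-I/O block to confirm the device is live
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$rootdir/test/event/nbdtest" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$rootdir/test/event/nbdtest")
                rm -f "$rootdir/test/event/nbdtest"
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }
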
00:06:01.334    13:32:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:01.334    13:32:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.334     13:32:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:01.594    13:32:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:01.594    {
00:06:01.594      "nbd_device": "/dev/nbd0",
00:06:01.594      "bdev_name": "Malloc0"
00:06:01.594    },
00:06:01.594    {
00:06:01.594      "nbd_device": "/dev/nbd1",
00:06:01.594      "bdev_name": "Malloc1"
00:06:01.594    }
00:06:01.594  ]'
00:06:01.594     13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:01.594    {
00:06:01.594      "nbd_device": "/dev/nbd0",
00:06:01.594      "bdev_name": "Malloc0"
00:06:01.594    },
00:06:01.594    {
00:06:01.594      "nbd_device": "/dev/nbd1",
00:06:01.594      "bdev_name": "Malloc1"
00:06:01.594    }
00:06:01.594  ]'
00:06:01.594     13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:01.594    13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:01.594  /dev/nbd1'
00:06:01.594     13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:01.594  /dev/nbd1'
00:06:01.594     13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:01.594    13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:01.594    13:32:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:01.594  256+0 records in
00:06:01.594  256+0 records out
00:06:01.594  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103035 s, 102 MB/s
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:01.594  256+0 records in
00:06:01.594  256+0 records out
00:06:01.594  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145719 s, 72.0 MB/s
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:01.594  256+0 records in
00:06:01.594  256+0 records out
00:06:01.594  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217702 s, 48.2 MB/s
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:01.594   13:32:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:01.852    13:32:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:01.852   13:32:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:01.852   13:32:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:01.852   13:32:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:01.852   13:32:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:01.853   13:32:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:01.853   13:32:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:01.853   13:32:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:01.853   13:32:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:01.853   13:32:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:02.112    13:32:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:02.112   13:32:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.112    13:32:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:02.112    13:32:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.112     13:32:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:02.370    13:32:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:02.370     13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:02.370     13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:02.370    13:32:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:02.370     13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:02.370     13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:02.370     13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:02.370    13:32:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:02.370    13:32:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:02.370   13:32:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:02.370   13:32:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:02.370   13:32:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:02.370   13:32:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:02.628   13:32:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:04.006  [2024-12-14 13:32:03.392991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:04.006  [2024-12-14 13:32:03.487667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.006  [2024-12-14 13:32:03.487674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.006  [2024-12-14 13:32:03.659142] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:04.006  [2024-12-14 13:32:03.659199] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:05.920   13:32:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:05.920   13:32:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:05.920  spdk_app_start Round 2
00:06:05.920   13:32:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3128505 /var/tmp/spdk-nbd.sock
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3128505 ']'
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:05.920  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.920   13:32:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:05.920   13:32:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:06.179  Malloc0
00:06:06.179   13:32:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:06.438  Malloc1
00:06:06.438   13:32:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.438   13:32:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:06.438  /dev/nbd0
00:06:06.438    13:32:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:06.438   13:32:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:06.438  1+0 records in
00:06:06.438  1+0 records out
00:06:06.438  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219727 s, 18.6 MB/s
00:06:06.438    13:32:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:06.438   13:32:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:06.697   13:32:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:06.697   13:32:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:06.697   13:32:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:06.697   13:32:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.697   13:32:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:06.697  /dev/nbd1
00:06:06.697    13:32:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:06.697   13:32:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:06.697   13:32:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:06.698  1+0 records in
00:06:06.698  1+0 records out
00:06:06.698  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239122 s, 17.1 MB/s
00:06:06.698    13:32:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:06.698   13:32:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:06.698   13:32:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:06.698   13:32:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.698    13:32:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:06.698    13:32:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.698     13:32:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:06.957    13:32:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:06.957    {
00:06:06.957      "nbd_device": "/dev/nbd0",
00:06:06.957      "bdev_name": "Malloc0"
00:06:06.957    },
00:06:06.957    {
00:06:06.957      "nbd_device": "/dev/nbd1",
00:06:06.957      "bdev_name": "Malloc1"
00:06:06.957    }
00:06:06.957  ]'
00:06:06.957     13:32:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:06.957    {
00:06:06.957      "nbd_device": "/dev/nbd0",
00:06:06.957      "bdev_name": "Malloc0"
00:06:06.957    },
00:06:06.957    {
00:06:06.957      "nbd_device": "/dev/nbd1",
00:06:06.957      "bdev_name": "Malloc1"
00:06:06.957    }
00:06:06.957  ]'
00:06:06.957     13:32:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:06.957    13:32:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:06.957  /dev/nbd1'
00:06:06.957     13:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:06.957  /dev/nbd1'
00:06:06.957     13:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:06.957    13:32:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:06.957    13:32:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:06.957  256+0 records in
00:06:06.957  256+0 records out
00:06:06.957  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103688 s, 101 MB/s
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:06.957  256+0 records in
00:06:06.957  256+0 records out
00:06:06.957  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154109 s, 68.0 MB/s
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:06.957   13:32:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:07.216  256+0 records in
00:06:07.216  256+0 records out
00:06:07.216  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205692 s, 51.0 MB/s
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:07.216    13:32:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:07.216   13:32:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:07.217   13:32:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:07.217   13:32:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:07.217   13:32:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:07.217   13:32:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:07.476    13:32:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:07.476   13:32:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:07.476    13:32:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:07.476    13:32:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:07.476     13:32:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:07.735    13:32:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:07.735     13:32:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:07.735     13:32:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:07.735    13:32:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:07.735     13:32:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:07.735     13:32:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:07.735     13:32:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:07.735    13:32:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:07.735    13:32:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:07.735   13:32:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:07.735   13:32:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:07.735   13:32:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:07.735   13:32:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:08.303   13:32:07 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:09.240  [2024-12-14 13:32:08.894818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:09.500  [2024-12-14 13:32:08.991706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.500  [2024-12-14 13:32:08.991706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.500  [2024-12-14 13:32:09.159454] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:09.500  [2024-12-14 13:32:09.159507] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:11.404   13:32:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3128505 /var/tmp/spdk-nbd.sock
00:06:11.404   13:32:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3128505 ']'
00:06:11.404   13:32:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:11.404   13:32:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:11.405  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:11.405   13:32:10 event.app_repeat -- event/event.sh@39 -- # killprocess 3128505
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3128505 ']'
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3128505
00:06:11.405    13:32:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:11.405   13:32:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:11.405    13:32:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128505
00:06:11.405   13:32:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:11.405   13:32:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:11.405   13:32:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128505'
00:06:11.405  killing process with pid 3128505
00:06:11.405   13:32:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3128505
00:06:11.405   13:32:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3128505
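
killprocess (autotest_common.sh@954-978) wraps the shutdown of a target by pid: probe that it is still alive with kill -0, look up its command name with ps (with special handling when the process turns out to be a sudo wrapper), send the default SIGTERM, and wait so the pid is reaped before the next test starts. A sketch:

    killprocess() {
        local pid=$1 process_name
        # kill -0 delivers no signal; it only fails if the pid no longer exists
        kill -0 "$pid"
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :    # the real helper signals the sudo child here; not exercised in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"     # SIGTERM by default
        wait "$pid"     # block until the process is fully gone
    }
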
00:06:12.342  spdk_app_start is called in Round 0.
00:06:12.342  Shutdown signal received, stop current app iteration
00:06:12.342  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization...
00:06:12.342  spdk_app_start is called in Round 1.
00:06:12.342  Shutdown signal received, stop current app iteration
00:06:12.342  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization...
00:06:12.342  spdk_app_start is called in Round 2.
00:06:12.342  Shutdown signal received, stop current app iteration
00:06:12.342  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization...
00:06:12.342  spdk_app_start is called in Round 3.
00:06:12.342  Shutdown signal received, stop current app iteration
00:06:12.342   13:32:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:12.342   13:32:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:12.342  
00:06:12.342  real	0m18.413s
00:06:12.342  user	0m38.428s
00:06:12.342  sys	0m3.145s
00:06:12.342   13:32:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.342   13:32:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:12.342  ************************************
00:06:12.342  END TEST app_repeat
00:06:12.343  ************************************
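
Taken together, the four rounds trace out the app_repeat loop in test/event/event.sh: start the app_repeat binary once, then three times over create the bdevs, run the NBD data verify, and ask the app to restart itself with spdk_kill_instance SIGTERM (app_repeat traps the signal and re-enters spdk_app_start, which is why each shutdown is followed by a "reinitialization..." banner rather than an exit). Approximately, with the $repeat_pid name assumed:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        bdev0=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096)
        bdev1=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096)
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock "$bdev0 $bdev1" '/dev/nbd0 /dev/nbd1'
        # Round-trip restart: the app catches SIGTERM and begins the next round
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock   # Round 3: final pass, then killprocess
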
00:06:12.343   13:32:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:12.343   13:32:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:12.343   13:32:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.343   13:32:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.343   13:32:12 event -- common/autotest_common.sh@10 -- # set +x
00:06:12.343  ************************************
00:06:12.343  START TEST cpu_locks
00:06:12.343  ************************************
00:06:12.343   13:32:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:12.602  * Looking for test storage...
00:06:12.602  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:06:12.602    13:32:12 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:12.602     13:32:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:06:12.602     13:32:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:12.602    13:32:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:12.602    13:32:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:12.603     13:32:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:12.603    13:32:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:12.603    13:32:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:12.603    13:32:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:12.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.603  		--rc genhtml_branch_coverage=1
00:06:12.603  		--rc genhtml_function_coverage=1
00:06:12.603  		--rc genhtml_legend=1
00:06:12.603  		--rc geninfo_all_blocks=1
00:06:12.603  		--rc geninfo_unexecuted_blocks=1
00:06:12.603  		
00:06:12.603  		'
00:06:12.603    13:32:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:12.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.603  		--rc genhtml_branch_coverage=1
00:06:12.603  		--rc genhtml_function_coverage=1
00:06:12.603  		--rc genhtml_legend=1
00:06:12.603  		--rc geninfo_all_blocks=1
00:06:12.603  		--rc geninfo_unexecuted_blocks=1
00:06:12.603  		
00:06:12.603  		'
00:06:12.603    13:32:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:12.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.603  		--rc genhtml_branch_coverage=1
00:06:12.603  		--rc genhtml_function_coverage=1
00:06:12.603  		--rc genhtml_legend=1
00:06:12.603  		--rc geninfo_all_blocks=1
00:06:12.603  		--rc geninfo_unexecuted_blocks=1
00:06:12.603  		
00:06:12.603  		'
00:06:12.603    13:32:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:12.603  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.603  		--rc genhtml_branch_coverage=1
00:06:12.603  		--rc genhtml_function_coverage=1
00:06:12.603  		--rc genhtml_legend=1
00:06:12.603  		--rc geninfo_all_blocks=1
00:06:12.603  		--rc geninfo_unexecuted_blocks=1
00:06:12.603  		
00:06:12.603  		'
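
The long scripts/common.sh trace above is the coverage-tooling version gate: lt 1.15 2 splits both version strings on ., -, and : and compares them field by field to decide whether the installed lcov predates 2.x, which selects the branch/function-coverage flags exported into LCOV_OPTS. A simplified, hedged rendering of that comparison (the real helper dispatches several operators through cmp_versions; only '<' is shown, and padding missing fields with 0 is an assumption):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 < d2 )) && return 0    # first lower field decides: strictly less
            (( d1 > d2 )) && return 1
        done
        return 1                          # equal versions are not strictly less
    }
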
00:06:12.603   13:32:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:12.603   13:32:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:12.603   13:32:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:12.603   13:32:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:12.603   13:32:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.603   13:32:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.603   13:32:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.603  ************************************
00:06:12.603  START TEST default_locks
00:06:12.603  ************************************
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3131942
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3131942
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3131942 ']'
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.603  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.603   13:32:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:12.862  [2024-12-14 13:32:12.382531] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:12.862  [2024-12-14 13:32:12.382648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131942 ]
00:06:12.862  [2024-12-14 13:32:12.514444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.122  [2024-12-14 13:32:12.608132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.690   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.690   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:13.690   13:32:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3131942
00:06:13.690   13:32:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3131942
00:06:13.690   13:32:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.258  lslocks: write error
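
locks_exist (cpu_locks.sh@22) asserts that the target really holds its per-core lock: it lists the pid's advisory file locks with lslocks and greps for the spdk_cpu_lock file name. The stray "lslocks: write error" is expected noise, most likely because grep -q exits as soon as it sees a match and lslocks gets EPIPE on its remaining output:

    locks_exist() {
        # Succeeds only if the pid holds a lock on an spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
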
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3131942
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3131942 ']'
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3131942
00:06:14.258    13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.258    13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131942
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131942'
00:06:14.258  killing process with pid 3131942
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3131942
00:06:14.258   13:32:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3131942
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3131942
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3131942
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.795    13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3131942
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3131942 ']'
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.795  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.795  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3131942) - No such process
00:06:16.795  ERROR: process (pid: 3131942) is no longer running
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:16.795  
00:06:16.795  real	0m3.766s
00:06:16.795  user	0m3.694s
00:06:16.795  sys	0m0.747s
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.795   13:32:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.795  ************************************
00:06:16.795  END TEST default_locks
00:06:16.795  ************************************
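The default_locks trace above starts a single-core spdk_tgt, confirms the core lock is held (lslocks -p PID piped to grep -q spdk_cpu_lock), kills the target, and then uses a negated waitforlisten to assert the PID is gone. The "lslocks: write error" line is expected noise: grep -q exits on its first match and closes the pipe, so lslocks fails to write its remaining rows. A minimal bash sketch of the same lock probe, assuming util-linux lslocks and a live PID in $1; locks_exist here is a local re-derivation of the traced helper, not SPDK's exact function:

#!/usr/bin/env bash
# Sketch: probe whether a PID holds an SPDK per-core CPU lock.
# Assumes util-linux lslocks is installed and $1 is a live PID.
locks_exist() {
    local pid=$1
    # SPDK names its core locks /var/tmp/spdk_cpu_lock_NNN; grep -q
    # succeeds on the first matching row and closes the pipe, which
    # is what produces the harmless "lslocks: write error" above.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

if locks_exist "$1"; then
    echo "pid $1 holds an spdk_cpu_lock"
else
    echo "pid $1 holds no spdk_cpu_lock"
fi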
00:06:16.795   13:32:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:16.795   13:32:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.795   13:32:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.795   13:32:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.795  ************************************
00:06:16.795  START TEST default_locks_via_rpc
00:06:16.795  ************************************
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3132608
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3132608
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3132608 ']'
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.795   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.796  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.796   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.796   13:32:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.796  [2024-12-14 13:32:16.208951] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:16.796  [2024-12-14 13:32:16.209045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132608 ]
00:06:16.796  [2024-12-14 13:32:16.341464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.796  [2024-12-14 13:32:16.437472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3132608
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3132608
00:06:17.733   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:17.992   13:32:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3132608
00:06:17.992   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3132608 ']'
00:06:17.992   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3132608
00:06:17.992    13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:17.992   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:17.992    13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132608
00:06:18.251   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:18.251   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:18.251   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132608'
00:06:18.251  killing process with pid 3132608
00:06:18.251   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3132608
00:06:18.251   13:32:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3132608
00:06:20.788  
00:06:20.788  real	0m3.818s
00:06:20.788  user	0m3.807s
00:06:20.788  sys	0m0.743s
00:06:20.788   13:32:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.788   13:32:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:20.788  ************************************
00:06:20.788  END TEST default_locks_via_rpc
00:06:20.788  ************************************
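default_locks_via_rpc exercises the same lock through the runtime RPCs framework_disable_cpumask_locks and framework_enable_cpumask_locks rather than process restarts, then re-checks that the lock file reappears. A sketch of driving those calls by hand, assuming an spdk_tgt already listening on /var/tmp/spdk.sock; the rpc.py path is an assumption, while the method names come from the trace:

#!/usr/bin/env bash
# Sketch: toggle SPDK CPU-core locks at runtime over the RPC socket.
# SPDK_DIR default is a placeholder assumption.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC framework_disable_cpumask_locks   # release /var/tmp/spdk_cpu_lock_*
$RPC framework_enable_cpumask_locks    # re-acquire them
lslocks | grep spdk_cpu_lock || echo "no core locks held"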
00:06:20.788   13:32:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:20.788   13:32:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:20.788   13:32:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.788   13:32:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:20.788  ************************************
00:06:20.788  START TEST non_locking_app_on_locked_coremask
00:06:20.788  ************************************
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3133341
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3133341 /var/tmp/spdk.sock
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3133341 ']'
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:20.788  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:20.788   13:32:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.788  [2024-12-14 13:32:20.120036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:20.788  [2024-12-14 13:32:20.120129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133341 ]
00:06:20.788  [2024-12-14 13:32:20.252736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.788  [2024-12-14 13:32:20.351094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3133607
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3133607 /var/tmp/spdk2.sock
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3133607 ']'
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:21.726  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.726   13:32:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:21.726  [2024-12-14 13:32:21.186788] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:21.726  [2024-12-14 13:32:21.186879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133607 ]
00:06:21.726  [2024-12-14 13:32:21.366951] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:21.726  [2024-12-14 13:32:21.367003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.008  [2024-12-14 13:32:21.558113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.544   13:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:24.544   13:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:24.544   13:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3133341
00:06:24.544   13:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3133341
00:06:24.544   13:32:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:25.112  lslocks: write error
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3133341
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3133341 ']'
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3133341
00:06:25.112    13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:25.112    13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133341
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133341'
00:06:25.112  killing process with pid 3133341
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3133341
00:06:25.112   13:32:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3133341
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3133607
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3133607 ']'
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3133607
00:06:30.420    13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:30.420    13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133607
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133607'
00:06:30.420  killing process with pid 3133607
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3133607
00:06:30.420   13:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3133607
00:06:31.844  
00:06:31.844  real	0m11.371s
00:06:31.844  user	0m11.601s
00:06:31.844  sys	0m1.589s
00:06:31.844   13:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:31.844   13:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:31.844  ************************************
00:06:31.844  END TEST non_locking_app_on_locked_coremask
00:06:31.844  ************************************
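non_locking_app_on_locked_coremask shows that a second target can share core 0 with a lock-holding first target only when it is started with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock). A rough sketch of the two-instance setup; the binary path and the sleep-based synchronization are assumptions, while the -m, -r, and --disable-cpumask-locks flags are taken from the trace:

#!/usr/bin/env bash
# Sketch: co-schedule two SPDK targets on core 0 by disabling the
# second instance's lock claim. TGT and the sleeps are assumptions.
TGT=${TGT:-/path/to/spdk/build/bin/spdk_tgt}

"$TGT" -m 0x1 &                       # first instance claims the core 0 lock
pid1=$!
sleep 2                               # crude wait for startup (assumption)
"$TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                               # second instance skips the claim
sleep 2
kill "$pid1" "$pid2"
wait 2>/dev/null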
00:06:31.844   13:32:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:31.844   13:32:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:31.844   13:32:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:31.844   13:32:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:31.844  ************************************
00:06:31.844  START TEST locking_app_on_unlocked_coremask
00:06:31.844  ************************************
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3135352
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3135352 /var/tmp/spdk.sock
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3135352 ']'
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.844  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:31.844   13:32:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:31.844  [2024-12-14 13:32:31.573590] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:31.844  [2024-12-14 13:32:31.573685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135352 ]
00:06:32.103  [2024-12-14 13:32:31.706337] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:32.103  [2024-12-14 13:32:31.706381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.103  [2024-12-14 13:32:31.804556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3135525
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3135525 /var/tmp/spdk2.sock
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3135525 ']'
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:33.041  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.041   13:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:33.041  [2024-12-14 13:32:32.630457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:33.041  [2024-12-14 13:32:32.630548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135525 ]
00:06:33.300  [2024-12-14 13:32:32.810210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.300  [2024-12-14 13:32:33.008649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.679   13:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.679   13:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:34.679   13:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3135525
00:06:34.679   13:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3135525
00:06:34.679   13:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:36.058  lslocks: write error
00:06:36.058   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3135352
00:06:36.058   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3135352 ']'
00:06:36.058   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3135352
00:06:36.058    13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:36.058   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:36.058    13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135352
00:06:36.058   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:36.059   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:36.059   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135352'
00:06:36.059  killing process with pid 3135352
00:06:36.059   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3135352
00:06:36.059   13:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3135352
00:06:41.334   13:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3135525
00:06:41.334   13:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3135525 ']'
00:06:41.334   13:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3135525
00:06:41.334    13:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.334    13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135525
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135525'
00:06:41.334  killing process with pid 3135525
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3135525
00:06:41.334   13:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3135525
00:06:42.715  
00:06:42.715  real	0m10.801s
00:06:42.715  user	0m10.890s
00:06:42.715  sys	0m1.611s
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.715  ************************************
00:06:42.715  END TEST locking_app_on_unlocked_coremask
00:06:42.715  ************************************
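locking_app_on_unlocked_coremask inverts the previous case: the first target starts with --disable-cpumask-locks, so the second, lock-enabled instance is the one that ends up owning the core 0 lock. A small sketch for reporting which PID holds a given lock file, assuming util-linux lslocks; the lock-file naming mirrors the trace, the rest is an assumption:

#!/usr/bin/env bash
# Sketch: report which PID holds a given SPDK core lock file.
# Assumes util-linux lslocks; default path mirrors the trace.
lockfile=${1:-/var/tmp/spdk_cpu_lock_000}
lslocks --noheadings --output PID,PATH |
    awk -v f="$lockfile" '$2 == f { print "core lock", f, "held by pid", $1 }'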
00:06:42.715   13:32:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:42.715   13:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.715   13:32:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.715   13:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:42.715  ************************************
00:06:42.715  START TEST locking_app_on_locked_coremask
00:06:42.715  ************************************
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3137296
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3137296 /var/tmp/spdk.sock
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3137296 ']'
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.715  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.715   13:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:42.715  [2024-12-14 13:32:42.452226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:42.715  [2024-12-14 13:32:42.452324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137296 ]
00:06:42.974  [2024-12-14 13:32:42.583937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.974  [2024-12-14 13:32:42.681253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.912   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:43.912   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:43.912   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3137431
00:06:43.912   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3137431 /var/tmp/spdk2.sock
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3137431 /var/tmp/spdk2.sock
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:43.913    13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3137431 /var/tmp/spdk2.sock
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3137431 ']'
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:43.913  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:43.913   13:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.913  [2024-12-14 13:32:43.489094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:43.913  [2024-12-14 13:32:43.489189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137431 ]
00:06:44.171  [2024-12-14 13:32:43.671979] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3137296 has claimed it.
00:06:44.171  [2024-12-14 13:32:43.672035] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:44.430  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3137431) - No such process
00:06:44.430  ERROR: process (pid: 3137431) is no longer running
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3137296
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:44.430   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3137296
00:06:45.368  lslocks: write error
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3137296
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3137296 ']'
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3137296
00:06:45.368    13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:45.368    13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137296
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3137296'
00:06:45.368  killing process with pid 3137296
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3137296
00:06:45.368   13:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3137296
00:06:47.274  
00:06:47.274  real	0m4.624s
00:06:47.274  user	0m4.758s
00:06:47.274  sys	0m1.053s
00:06:47.274   13:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:47.274   13:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.274  ************************************
00:06:47.274  END TEST locking_app_on_locked_coremask
00:06:47.274  ************************************
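locking_app_on_locked_coremask expects the second lock-enabled target to abort ("Cannot create lock on core 0, probably process ... has claimed it" followed by "Unable to acquire lock on assigned core mask - exiting."), so the trace wraps waitforlisten in NOT and treats the failure as a pass. A minimal re-derivation of that negation idea, assuming nothing beyond bash; the traced helper additionally records es and toggles xtrace as seen above:

#!/usr/bin/env bash
# Sketch: a NOT-style negation wrapper, succeeding only when the
# wrapped command fails. Local re-derivation for illustration.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which the caller expected
}

NOT false && echo "NOT false -> pass"
NOT true  || echo "NOT true  -> fail, as intended"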
00:06:47.534   13:32:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:47.534   13:32:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:47.534   13:32:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:47.534   13:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.534  ************************************
00:06:47.534  START TEST locking_overlapped_coremask
00:06:47.534  ************************************
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3138184
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3138184 /var/tmp/spdk.sock
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3138184 ']'
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.534  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.534   13:32:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:47.534  [2024-12-14 13:32:47.144084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:47.534  [2024-12-14 13:32:47.144184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138184 ]
00:06:47.793  [2024-12-14 13:32:47.275489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:47.793  [2024-12-14 13:32:47.374746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:47.793  [2024-12-14 13:32:47.374824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.793  [2024-12-14 13:32:47.374829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3138268
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3138268 /var/tmp/spdk2.sock
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3138268 /var/tmp/spdk2.sock
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.731    13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3138268 /var/tmp/spdk2.sock
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3138268 ']'
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:48.731  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.731   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.731  [2024-12-14 13:32:48.219302] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:48.731  [2024-12-14 13:32:48.219396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138268 ]
00:06:48.731  [2024-12-14 13:32:48.406700] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3138184 has claimed it.
00:06:48.731  [2024-12-14 13:32:48.406758] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:49.299  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3138268) - No such process
00:06:49.299  ERROR: process (pid: 3138268) is no longer running
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3138184
00:06:49.299   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3138184 ']'
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3138184
00:06:49.300    13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:49.300    13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138184
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138184'
00:06:49.300  killing process with pid 3138184
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3138184
00:06:49.300   13:32:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3138184
00:06:51.837  
00:06:51.837  real	0m4.148s
00:06:51.837  user	0m11.360s
00:06:51.837  sys	0m0.689s
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.837  ************************************
00:06:51.837  END TEST locking_overlapped_coremask
00:06:51.837  ************************************
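locking_overlapped_coremask runs target A on -m 0x7 (cores 0-2) and target B on -m 0x1c (cores 2-4); the shared core 2 triggers the claim error, after which check_remaining_locks asserts that exactly locks 000-002 survive by comparing a glob of /var/tmp/spdk_cpu_lock_* against a brace expansion. A sketch of that comparison, assuming bash 4+ for zero-padded brace ranges; the paths come from the trace:

#!/usr/bin/env bash
# Sketch: assert that exactly core locks 000-002 remain, mirroring
# check_remaining_locks. Needs bash 4+ for zero-padded brace ranges.
locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${expected[*]}" ]]; then
    echo "remaining locks match cores 0-2"
else
    echo "unexpected lock set: ${locks[*]}"
fi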
00:06:51.837   13:32:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:51.837   13:32:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.837   13:32:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.837   13:32:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.837  ************************************
00:06:51.837  START TEST locking_overlapped_coremask_via_rpc
00:06:51.837  ************************************
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3138836
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3138836 /var/tmp/spdk.sock
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3138836 ']'
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.837  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.837   13:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:51.837  [2024-12-14 13:32:51.379055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:51.837  [2024-12-14 13:32:51.379141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138836 ]
00:06:51.837  [2024-12-14 13:32:51.511783] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:51.837  [2024-12-14 13:32:51.511830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:52.096  [2024-12-14 13:32:51.619171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.096  [2024-12-14 13:32:51.619181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.096  [2024-12-14 13:32:51.619186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3139098
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3139098 /var/tmp/spdk2.sock
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3139098 ']'
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:52.665  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.665   13:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.924  [2024-12-14 13:32:52.460006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:52.924  [2024-12-14 13:32:52.460097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139098 ]
00:06:52.924  [2024-12-14 13:32:52.643937] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:52.924  [2024-12-14 13:32:52.643988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:53.184  [2024-12-14 13:32:52.858217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:06:53.184  [2024-12-14 13:32:52.858308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:53.184  [2024-12-14 13:32:52.858336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:55.720    13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.720  [2024-12-14 13:32:54.969050] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3138836 has claimed it.
00:06:55.720  request:
00:06:55.720  {
00:06:55.720  "method": "framework_enable_cpumask_locks",
00:06:55.720  "req_id": 1
00:06:55.720  }
00:06:55.720  Got JSON-RPC error response
00:06:55.720  response:
00:06:55.720  {
00:06:55.720  "code": -32603,
00:06:55.720  "message": "Failed to claim CPU core: 2"
00:06:55.720  }
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3138836 /var/tmp/spdk.sock
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3138836 ']'
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.720  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.720   13:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3139098 /var/tmp/spdk2.sock
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3139098 ']'
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:55.720  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
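Both waitforlisten calls above follow the same pattern: print the banner, then poll until the target either dies or its RPC socket comes up. A condensed sketch (the real helper in autotest_common.sh also confirms readiness with an actual RPC, which this skips):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
            [ -S "$rpc_addr" ] && return 0           # socket is up
            sleep 0.1
        done
        return 1
    }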
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
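check_remaining_locks leans on two different bash expansions: a filename glob for the lock files that actually exist, and brace expansion for the three cores this test expects, compared as flattened strings:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # glob: what is on disk now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # braces: 000 001 002
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"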
00:06:55.720  
00:06:55.720  real	0m4.088s
00:06:55.720  user	0m1.100s
00:06:55.720  sys	0m0.209s
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.720   13:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.720  ************************************
00:06:55.720  END TEST locking_overlapped_coremask_via_rpc
00:06:55.720  ************************************
00:06:55.720   13:32:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:55.720   13:32:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3138836 ]]
00:06:55.720   13:32:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3138836
00:06:55.720   13:32:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3138836 ']'
00:06:55.720   13:32:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3138836
00:06:55.720    13:32:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:06:55.720   13:32:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.720    13:32:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138836
00:06:55.979   13:32:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.979   13:32:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.979   13:32:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138836'
00:06:55.979  killing process with pid 3138836
00:06:55.979   13:32:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3138836
00:06:55.979   13:32:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3138836
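The kill/wait pair above is the tail of autotest_common.sh's killprocess: confirm the pid is alive, check its comm name (reactor_0 here) so a sudo wrapper is never signalled directly, then kill and reap. A hedged sketch of that flow:

    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 1     # nothing to kill
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 in the run above
        [ "$name" = sudo ] && return 1             # the real helper treats sudo specially
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null     # wait works: target is this shell's child
    }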
00:06:58.516   13:32:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3139098 ]]
00:06:58.516   13:32:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3139098
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3139098 ']'
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3139098
00:06:58.516    13:32:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.516    13:32:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139098
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139098'
00:06:58.516  killing process with pid 3139098
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3139098
00:06:58.516   13:32:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3139098
00:07:00.423   13:33:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3138836 ]]
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3138836
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3138836 ']'
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3138836
00:07:00.682  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3138836) - No such process
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3138836 is not found'
00:07:00.682  Process with pid 3138836 is not found
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3139098 ]]
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3139098
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3139098 ']'
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3139098
00:07:00.682  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3139098) - No such process
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3139098 is not found'
00:07:00.682  Process with pid 3139098 is not found
00:07:00.682   13:33:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:00.682  
00:07:00.682  real	0m48.100s
00:07:00.682  user	1m21.891s
00:07:00.682  sys	0m8.016s
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.682   13:33:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:00.682  ************************************
00:07:00.682  END TEST cpu_locks
00:07:00.682  ************************************
00:07:00.682  
00:07:00.682  real	1m16.659s
00:07:00.682  user	2m16.200s
00:07:00.682  sys	0m12.596s
00:07:00.682   13:33:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.682   13:33:00 event -- common/autotest_common.sh@10 -- # set +x
00:07:00.682  ************************************
00:07:00.682  END TEST event
00:07:00.682  ************************************
00:07:00.682   13:33:00  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:07:00.682   13:33:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.682   13:33:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.682   13:33:00  -- common/autotest_common.sh@10 -- # set +x
00:07:00.682  ************************************
00:07:00.682  START TEST thread
00:07:00.682  ************************************
00:07:00.682   13:33:00 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:07:00.682  * Looking for test storage...
00:07:00.682  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread
00:07:00.682    13:33:00 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:00.682     13:33:00 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:07:00.682     13:33:00 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:00.942    13:33:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:00.942    13:33:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:00.942    13:33:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:00.942    13:33:00 thread -- scripts/common.sh@336 -- # IFS=.-:
00:07:00.942    13:33:00 thread -- scripts/common.sh@336 -- # read -ra ver1
00:07:00.942    13:33:00 thread -- scripts/common.sh@337 -- # IFS=.-:
00:07:00.942    13:33:00 thread -- scripts/common.sh@337 -- # read -ra ver2
00:07:00.942    13:33:00 thread -- scripts/common.sh@338 -- # local 'op=<'
00:07:00.942    13:33:00 thread -- scripts/common.sh@340 -- # ver1_l=2
00:07:00.942    13:33:00 thread -- scripts/common.sh@341 -- # ver2_l=1
00:07:00.942    13:33:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.942    13:33:00 thread -- scripts/common.sh@344 -- # case "$op" in
00:07:00.942    13:33:00 thread -- scripts/common.sh@345 -- # : 1
00:07:00.942    13:33:00 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:00.942    13:33:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.942     13:33:00 thread -- scripts/common.sh@365 -- # decimal 1
00:07:00.942     13:33:00 thread -- scripts/common.sh@353 -- # local d=1
00:07:00.942     13:33:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.942     13:33:00 thread -- scripts/common.sh@355 -- # echo 1
00:07:00.942    13:33:00 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:07:00.942     13:33:00 thread -- scripts/common.sh@366 -- # decimal 2
00:07:00.942     13:33:00 thread -- scripts/common.sh@353 -- # local d=2
00:07:00.942     13:33:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.942     13:33:00 thread -- scripts/common.sh@355 -- # echo 2
00:07:00.942    13:33:00 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:07:00.942    13:33:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:00.942    13:33:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:00.942    13:33:00 thread -- scripts/common.sh@368 -- # return 0
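The scripts/common.sh walk above (cmp_versions via lt) splits both version strings on '.', '-' and ':' and compares the numeric fields left to right, so 1.15 < 2 holds and the lcov coverage flags below get enabled. A minimal sketch, assuming purely numeric fields:

    lt_sketch() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver2[v]:-0} < ${ver1[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal
    }
    lt_sketch 1.15 2 && echo "lcov predates 2.x: add the branch/function --rc flags"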
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:00.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.942  		--rc genhtml_branch_coverage=1
00:07:00.942  		--rc genhtml_function_coverage=1
00:07:00.942  		--rc genhtml_legend=1
00:07:00.942  		--rc geninfo_all_blocks=1
00:07:00.942  		--rc geninfo_unexecuted_blocks=1
00:07:00.942  		
00:07:00.942  		'
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:00.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.942  		--rc genhtml_branch_coverage=1
00:07:00.942  		--rc genhtml_function_coverage=1
00:07:00.942  		--rc genhtml_legend=1
00:07:00.942  		--rc geninfo_all_blocks=1
00:07:00.942  		--rc geninfo_unexecuted_blocks=1
00:07:00.942  		
00:07:00.942  		'
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:00.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.942  		--rc genhtml_branch_coverage=1
00:07:00.942  		--rc genhtml_function_coverage=1
00:07:00.942  		--rc genhtml_legend=1
00:07:00.942  		--rc geninfo_all_blocks=1
00:07:00.942  		--rc geninfo_unexecuted_blocks=1
00:07:00.942  		
00:07:00.942  		'
00:07:00.942    13:33:00 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:00.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.942  		--rc genhtml_branch_coverage=1
00:07:00.942  		--rc genhtml_function_coverage=1
00:07:00.942  		--rc genhtml_legend=1
00:07:00.942  		--rc geninfo_all_blocks=1
00:07:00.942  		--rc geninfo_unexecuted_blocks=1
00:07:00.942  		
00:07:00.942  		'
00:07:00.942   13:33:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:00.942   13:33:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:00.942   13:33:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.942   13:33:00 thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.942  ************************************
00:07:00.942  START TEST thread_poller_perf
00:07:00.942  ************************************
00:07:00.942   13:33:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:00.942  [2024-12-14 13:33:00.550187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:00.942  [2024-12-14 13:33:00.550273] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140589 ]
00:07:01.201  [2024-12-14 13:33:00.683512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.201  [2024-12-14 13:33:00.788179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.201  Running 1000 pollers for 1 second with a 1 microsecond period.
00:07:02.579  
00:07:02.579  ======================================
00:07:02.579  busy:2509459140 (cyc)
00:07:02.579  total_run_count: 403000
00:07:02.579  tsc_hz: 2500000000 (cyc)
00:07:02.579  ======================================
00:07:02.579  poller_cost: 6226 (cyc), 2490 (nsec)
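poller_cost is busy TSC cycles divided by total polls, converted to wall time through the reported TSC rate; the figures above reproduce exactly (integer truncation assumed):

    awk 'BEGIN { busy = 2509459140; runs = 403000; hz = 2500000000
                 cyc = int(busy / runs)                 # 6226 cyc per poll
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (hz / 1e9) }'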
00:07:02.579  real	0m1.499s
00:07:02.579  user	0m1.338s
00:07:02.579  sys	0m0.153s
00:07:02.579   13:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.579   13:33:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:02.579  ************************************
00:07:02.579  END TEST thread_poller_perf
00:07:02.579  ************************************
00:07:02.579   13:33:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:02.579   13:33:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:02.579   13:33:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.579   13:33:02 thread -- common/autotest_common.sh@10 -- # set +x
00:07:02.579  ************************************
00:07:02.579  START TEST thread_poller_perf
00:07:02.579  ************************************
00:07:02.579   13:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:02.579  [2024-12-14 13:33:02.095489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:02.579  [2024-12-14 13:33:02.095583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140981 ]
00:07:02.579  [2024-12-14 13:33:02.224838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.838  [2024-12-14 13:33:02.322040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.838  Running 1000 pollers for 1 second with a 0 microsecond period.
00:07:03.831  
00:07:03.831  ======================================
00:07:03.831  busy:2503111000 (cyc)
00:07:03.831  total_run_count: 4944000
00:07:03.831  tsc_hz: 2500000000 (cyc)
00:07:03.831  ======================================
00:07:03.831  poller_cost: 506 (cyc), 202 (nsec)
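Read against the first run: with the period dropped from 1 µs to 0 µs the pollers spin back-to-back, so over the same one-second window total_run_count rises from 403000 to 4944000 and the per-poll cost falls from 6226 cyc (2490 ns) to 2503111000 / 4944000 ≈ 506 cyc, i.e. 202 ns at 2.5 GHz; the extra per-poll cycles in the first run plausibly reflect timer-poller bookkeeping rather than the poller body itself.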
00:07:03.831  real	0m1.464s
00:07:03.831  user	0m1.324s
00:07:03.831  sys	0m0.133s
00:07:03.831   13:33:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.831   13:33:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:03.831  ************************************
00:07:03.831  END TEST thread_poller_perf
00:07:03.831  ************************************
00:07:03.831   13:33:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:03.831  
00:07:03.831  real	0m3.275s
00:07:03.831  user	0m2.813s
00:07:03.831  sys	0m0.478s
00:07:03.831   13:33:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.831   13:33:03 thread -- common/autotest_common.sh@10 -- # set +x
00:07:03.831  ************************************
00:07:03.831  END TEST thread
00:07:03.831  ************************************
00:07:04.090   13:33:03  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:04.090   13:33:03  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:07:04.090   13:33:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.090   13:33:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.090   13:33:03  -- common/autotest_common.sh@10 -- # set +x
00:07:04.090  ************************************
00:07:04.090  START TEST app_cmdline
00:07:04.090  ************************************
00:07:04.090   13:33:03 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:07:04.090  * Looking for test storage...
00:07:04.090  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:07:04.090    13:33:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:04.090     13:33:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:07:04.090     13:33:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:04.090    13:33:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.090    13:33:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.349     13:33:03 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.349    13:33:03 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:04.349    13:33:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.349    13:33:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:04.350  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.350  		--rc genhtml_branch_coverage=1
00:07:04.350  		--rc genhtml_function_coverage=1
00:07:04.350  		--rc genhtml_legend=1
00:07:04.350  		--rc geninfo_all_blocks=1
00:07:04.350  		--rc geninfo_unexecuted_blocks=1
00:07:04.350  		
00:07:04.350  		'
00:07:04.350    13:33:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:04.350  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.350  		--rc genhtml_branch_coverage=1
00:07:04.350  		--rc genhtml_function_coverage=1
00:07:04.350  		--rc genhtml_legend=1
00:07:04.350  		--rc geninfo_all_blocks=1
00:07:04.350  		--rc geninfo_unexecuted_blocks=1
00:07:04.350  		
00:07:04.350  		'
00:07:04.350    13:33:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:04.350  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.350  		--rc genhtml_branch_coverage=1
00:07:04.350  		--rc genhtml_function_coverage=1
00:07:04.350  		--rc genhtml_legend=1
00:07:04.350  		--rc geninfo_all_blocks=1
00:07:04.350  		--rc geninfo_unexecuted_blocks=1
00:07:04.350  		
00:07:04.350  		'
00:07:04.350    13:33:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:04.350  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.350  		--rc genhtml_branch_coverage=1
00:07:04.350  		--rc genhtml_function_coverage=1
00:07:04.350  		--rc genhtml_legend=1
00:07:04.350  		--rc geninfo_all_blocks=1
00:07:04.350  		--rc geninfo_unexecuted_blocks=1
00:07:04.350  		
00:07:04.350  		'
00:07:04.350   13:33:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:04.350   13:33:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3141447
00:07:04.350   13:33:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3141447
00:07:04.350   13:33:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3141447 ']'
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:04.350  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.350   13:33:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:04.350  [2024-12-14 13:33:03.938730] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:04.350  [2024-12-14 13:33:03.938829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141447 ]
00:07:04.350  [2024-12-14 13:33:04.071685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.610  [2024-12-14 13:33:04.171248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.178   13:33:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.178   13:33:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:07:05.178   13:33:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:05.437  {
00:07:05.437    "version": "SPDK v25.01-pre git sha1 e01cb43b8",
00:07:05.437    "fields": {
00:07:05.437      "major": 25,
00:07:05.437      "minor": 1,
00:07:05.437      "patch": 0,
00:07:05.437      "suffix": "-pre",
00:07:05.437      "commit": "e01cb43b8"
00:07:05.437    }
00:07:05.437  }
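The same query works from the shell against the target's default socket; combined with the jq/sort pipeline the test uses below for rpc_get_methods:

    scripts/rpc.py spdk_get_version | jq -r '.version'    # SPDK v25.01-pre git sha1 e01cb43b8
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # rpc_get_methods, spdk_get_version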
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:05.437    13:33:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:05.437    13:33:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:05.437    13:33:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.437    13:33:05 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:05.437    13:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:05.437    13:33:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:05.437   13:33:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:05.437    13:33:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:05.437    13:33:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:07:05.437   13:33:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:05.696  request:
00:07:05.696  {
00:07:05.696    "method": "env_dpdk_get_mem_stats",
00:07:05.696    "req_id": 1
00:07:05.696  }
00:07:05.696  Got JSON-RPC error response
00:07:05.696  response:
00:07:05.696  {
00:07:05.696    "code": -32601,
00:07:05.696    "message": "Method not found"
00:07:05.696  }
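This -32601 is the allowlist doing its job: the target was started above with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method name is rejected as not found before it can run:

    scripts/rpc.py env_dpdk_get_mem_stats   # rejected -> {"code": -32601, "message": "Method not found"}
    scripts/rpc.py spdk_get_version         # accepted: method is on the allowlist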
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:05.696   13:33:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3141447
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3141447 ']'
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3141447
00:07:05.696    13:33:05 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:05.696    13:33:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141447
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141447'
00:07:05.696  killing process with pid 3141447
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 3141447
00:07:05.696   13:33:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 3141447
00:07:08.233  
00:07:08.233  real	0m3.927s
00:07:08.233  user	0m4.093s
00:07:08.233  sys	0m0.670s
00:07:08.233   13:33:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.233   13:33:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:08.233  ************************************
00:07:08.233  END TEST app_cmdline
00:07:08.233  ************************************
00:07:08.233   13:33:07  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:07:08.233   13:33:07  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.233   13:33:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.233   13:33:07  -- common/autotest_common.sh@10 -- # set +x
00:07:08.233  ************************************
00:07:08.233  START TEST version
00:07:08.233  ************************************
00:07:08.233   13:33:07 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:07:08.233  * Looking for test storage...
00:07:08.233  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:08.233     13:33:07 version -- common/autotest_common.sh@1711 -- # lcov --version
00:07:08.233     13:33:07 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:08.233    13:33:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:08.233    13:33:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:08.233    13:33:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:08.233    13:33:07 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:08.233    13:33:07 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:08.233    13:33:07 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:08.233    13:33:07 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:08.233    13:33:07 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:08.233    13:33:07 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:08.233    13:33:07 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:08.233    13:33:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:08.233    13:33:07 version -- scripts/common.sh@344 -- # case "$op" in
00:07:08.233    13:33:07 version -- scripts/common.sh@345 -- # : 1
00:07:08.233    13:33:07 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:08.233    13:33:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:08.233     13:33:07 version -- scripts/common.sh@365 -- # decimal 1
00:07:08.233     13:33:07 version -- scripts/common.sh@353 -- # local d=1
00:07:08.233     13:33:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:08.233     13:33:07 version -- scripts/common.sh@355 -- # echo 1
00:07:08.233    13:33:07 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:08.233     13:33:07 version -- scripts/common.sh@366 -- # decimal 2
00:07:08.233     13:33:07 version -- scripts/common.sh@353 -- # local d=2
00:07:08.233     13:33:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:08.233     13:33:07 version -- scripts/common.sh@355 -- # echo 2
00:07:08.233    13:33:07 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:08.233    13:33:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:08.233    13:33:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:08.233    13:33:07 version -- scripts/common.sh@368 -- # return 0
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:08.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.233  		--rc genhtml_branch_coverage=1
00:07:08.233  		--rc genhtml_function_coverage=1
00:07:08.233  		--rc genhtml_legend=1
00:07:08.233  		--rc geninfo_all_blocks=1
00:07:08.233  		--rc geninfo_unexecuted_blocks=1
00:07:08.233  		
00:07:08.233  		'
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:08.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.233  		--rc genhtml_branch_coverage=1
00:07:08.233  		--rc genhtml_function_coverage=1
00:07:08.233  		--rc genhtml_legend=1
00:07:08.233  		--rc geninfo_all_blocks=1
00:07:08.233  		--rc geninfo_unexecuted_blocks=1
00:07:08.233  		
00:07:08.233  		'
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:08.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.233  		--rc genhtml_branch_coverage=1
00:07:08.233  		--rc genhtml_function_coverage=1
00:07:08.233  		--rc genhtml_legend=1
00:07:08.233  		--rc geninfo_all_blocks=1
00:07:08.233  		--rc geninfo_unexecuted_blocks=1
00:07:08.233  		
00:07:08.233  		'
00:07:08.233    13:33:07 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:08.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.233  		--rc genhtml_branch_coverage=1
00:07:08.233  		--rc genhtml_function_coverage=1
00:07:08.233  		--rc genhtml_legend=1
00:07:08.233  		--rc geninfo_all_blocks=1
00:07:08.233  		--rc geninfo_unexecuted_blocks=1
00:07:08.233  		
00:07:08.233  		'
00:07:08.233    13:33:07 version -- app/version.sh@17 -- # get_header_version major
00:07:08.233    13:33:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # cut -f2
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # tr -d '"'
00:07:08.233   13:33:07 version -- app/version.sh@17 -- # major=25
00:07:08.233    13:33:07 version -- app/version.sh@18 -- # get_header_version minor
00:07:08.233    13:33:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # cut -f2
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # tr -d '"'
00:07:08.233   13:33:07 version -- app/version.sh@18 -- # minor=1
00:07:08.233    13:33:07 version -- app/version.sh@19 -- # get_header_version patch
00:07:08.233    13:33:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # cut -f2
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # tr -d '"'
00:07:08.233   13:33:07 version -- app/version.sh@19 -- # patch=0
00:07:08.233    13:33:07 version -- app/version.sh@20 -- # get_header_version suffix
00:07:08.233    13:33:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # tr -d '"'
00:07:08.233    13:33:07 version -- app/version.sh@14 -- # cut -f2
00:07:08.233   13:33:07 version -- app/version.sh@20 -- # suffix=-pre
00:07:08.233   13:33:07 version -- app/version.sh@22 -- # version=25.1
00:07:08.233   13:33:07 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:08.233   13:33:07 version -- app/version.sh@28 -- # version=25.1rc0
00:07:08.233   13:33:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:07:08.233    13:33:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:08.233   13:33:07 version -- app/version.sh@30 -- # py_version=25.1rc0
00:07:08.233   13:33:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
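get_header_version, traced above for MAJOR, MINOR, PATCH and SUFFIX, is a grep/cut/tr pipeline over include/spdk/version.h; a condensed sketch (cut -f2 assumes the #define lines are tab-separated, as the trace implies, and the -pre-to-rc0 mapping follows the version=25.1rc0 assignment above):

    get_header_version_sketch() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    version="$(get_header_version_sketch MAJOR).$(get_header_version_sketch MINOR)"
    [ "$(get_header_version_sketch SUFFIX)" = -pre ] && version+=rc0   # 25.1 -> 25.1rc0
    echo "$version"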
00:07:08.233  
00:07:08.233  real	0m0.272s
00:07:08.233  user	0m0.156s
00:07:08.233  sys	0m0.170s
00:07:08.233   13:33:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.233   13:33:07 version -- common/autotest_common.sh@10 -- # set +x
00:07:08.233  ************************************
00:07:08.233  END TEST version
00:07:08.233  ************************************
00:07:08.493   13:33:07  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:08.493   13:33:07  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:08.493    13:33:07  -- spdk/autotest.sh@194 -- # uname -s
00:07:08.493   13:33:07  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:07:08.493   13:33:07  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:08.493   13:33:07  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:08.493   13:33:07  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:07:08.493   13:33:07  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:08.493   13:33:07  -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:08.493   13:33:07  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:08.493   13:33:07  -- common/autotest_common.sh@10 -- # set +x
00:07:08.493   13:33:08  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:08.493   13:33:08  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:07:08.493   13:33:08  -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:07:08.493   13:33:08  -- spdk/autotest.sh@277 -- # export NET_TYPE
00:07:08.493   13:33:08  -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']'
00:07:08.493   13:33:08  -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:07:08.493   13:33:08  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:08.493   13:33:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.493   13:33:08  -- common/autotest_common.sh@10 -- # set +x
00:07:08.493  ************************************
00:07:08.493  START TEST nvmf_rdma
00:07:08.493  ************************************
00:07:08.493   13:33:08 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:07:08.493  * Looking for test storage...
00:07:08.493  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:07:08.493    13:33:08 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:08.493     13:33:08 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:08.493     13:33:08 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version
00:07:08.493    13:33:08 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:07:08.493    13:33:08 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:08.753     13:33:08 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:07:08.753     13:33:08 nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:07:08.753     13:33:08 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:08.753     13:33:08 nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:07:08.753    13:33:08 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:07:08.753     13:33:08 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:07:08.754     13:33:08 nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:07:08.754     13:33:08 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:08.754     13:33:08 nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:07:08.754    13:33:08 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:07:08.754    13:33:08 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:08.754    13:33:08 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:08.754    13:33:08 nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:07:08.754    13:33:08 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:08.754    13:33:08 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s
00:07:08.754   13:33:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:08.754   13:33:08 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:07:08.754   13:33:08 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:08.754   13:33:08 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.754   13:33:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:08.754  ************************************
00:07:08.754  START TEST nvmf_target_core
00:07:08.754  ************************************
00:07:08.754   13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:07:08.754  * Looking for test storage...
00:07:08.754  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:08.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.754  		--rc genhtml_branch_coverage=1
00:07:08.754  		--rc genhtml_function_coverage=1
00:07:08.754  		--rc genhtml_legend=1
00:07:08.754  		--rc geninfo_all_blocks=1
00:07:08.754  		--rc geninfo_unexecuted_blocks=1
00:07:08.754  		
00:07:08.754  		'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:07:08.754   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:08.754   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:08.754    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:08.754     13:33:08 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:08.754      13:33:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:08.754      13:33:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:08.755      13:33:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:08.755      13:33:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:07:08.755      13:33:08 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:08.755  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
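The "integer expression expected" complaint is nvmf/common.sh line 33 feeding an unset flag straight into a numeric test: '[' '' -eq 1 ']' is a bash error, but since the test simply evaluates false the script carries on. Giving the variable a default keeps the same behaviour without the noise (FLAG is a stand-in; the real variable name is whatever common.sh tests on that line):

    if [ "${FLAG:-0}" -eq 1 ]; then
        :   # NIC-specific setup would run here
    fi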
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:08.755    13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
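The "integer expression expected" message above is worth a second look: at nvmf/common.sh line 33 an empty string reaches bash's numeric test ('[' '' -eq 1 ']'), which prints an error but does not abort the script. A minimal reproduction of the pitfall and the usual guard, as a sketch (the variable name here is hypothetical, not the one common.sh uses):

    # Reproduces the error printed at nvmf/common.sh line 33:
    flag=''
    [ "$flag" -eq 1 ] && echo enabled   # -> [: : integer expression expected
    # A parameter-expansion default keeps the test numeric:
    [ "${flag:-0}" -eq 1 ] && echo enabled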
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.755   13:33:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:09.015  ************************************
00:07:09.015  START TEST nvmf_abort
00:07:09.015  ************************************
00:07:09.015   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma
00:07:09.015  * Looking for test storage...
00:07:09.015  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:09.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.015  		--rc genhtml_branch_coverage=1
00:07:09.015  		--rc genhtml_function_coverage=1
00:07:09.015  		--rc genhtml_legend=1
00:07:09.015  		--rc geninfo_all_blocks=1
00:07:09.015  		--rc geninfo_unexecuted_blocks=1
00:07:09.015  		
00:07:09.015  		'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:09.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.015  		--rc genhtml_branch_coverage=1
00:07:09.015  		--rc genhtml_function_coverage=1
00:07:09.015  		--rc genhtml_legend=1
00:07:09.015  		--rc geninfo_all_blocks=1
00:07:09.015  		--rc geninfo_unexecuted_blocks=1
00:07:09.015  		
00:07:09.015  		'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:09.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.015  		--rc genhtml_branch_coverage=1
00:07:09.015  		--rc genhtml_function_coverage=1
00:07:09.015  		--rc genhtml_legend=1
00:07:09.015  		--rc geninfo_all_blocks=1
00:07:09.015  		--rc geninfo_unexecuted_blocks=1
00:07:09.015  		
00:07:09.015  		'
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:09.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.015  		--rc genhtml_branch_coverage=1
00:07:09.015  		--rc genhtml_function_coverage=1
00:07:09.015  		--rc genhtml_legend=1
00:07:09.015  		--rc geninfo_all_blocks=1
00:07:09.015  		--rc geninfo_unexecuted_blocks=1
00:07:09.015  		
00:07:09.015  		'
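The block above probes the installed lcov: "lt 1.15 2" compares the version strings field by field, and because the tool is older than 2 the script keeps the "--rc lcov_*" option spelling in LCOV_OPTS (newer lcov releases renamed those knobs). A simplified re-implementation of the comparison, covering only the '<' case exercised here rather than the full scripts/common.sh helper:

    # Field-by-field version compare; returns 0 when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_* spelling"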
00:07:09.015   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:09.015     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:07:09.015    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:09.016     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:07:09.016     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:09.016     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:09.016     13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:09.016      13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.016      13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.016      13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.016      13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:07:09.016      13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:09.016  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:09.016    13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:09.016   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:09.275   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:09.275   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:07:09.275   13:33:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:07:15.847   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:07:15.848  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:07:15.848  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:07:15.848  Found net devices under 0000:d9:00.0: mlx_0_0
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:07:15.848  Found net devices under 0000:d9:00.1: mlx_0_1
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:15.848     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:15.848     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:07:15.848  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:15.848      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:07:15.848      altname enp217s0f0np0
00:07:15.848      altname ens818f0np0
00:07:15.848      inet 192.168.100.8/24 scope global mlx_0_0
00:07:15.848         valid_lft forever preferred_lft forever
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:07:15.848  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:15.848      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:07:15.848      altname enp217s0f1np1
00:07:15.848      altname ens818f1np1
00:07:15.848      inet 192.168.100.9/24 scope global mlx_0_1
00:07:15.848         valid_lft forever preferred_lft forever
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:07:15.848   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:07:15.848    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:07:15.848     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list
00:07:15.848     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:15.848     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:15.849      13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:15.849      13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1
00:07:15.849     13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:07:15.849  192.168.100.9'
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:07:15.849  192.168.100.9'
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:07:15.849  192.168.100.9'
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2
00:07:15.849    13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma
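Everything from prepare_net_devs down to this point reduces to three steps: classify the RDMA-capable PCI NICs (here two mlx5 ports, 0x15b3 - 0x1015), load the InfiniBand/RDMA kernel modules, and read the first IPv4 address off each port's netdev. Condensed into a standalone sketch that mirrors the traced commands (function names are illustrative, not the exact common.sh helpers):

    # Load the RDMA stack, mirroring the modprobe lines above:
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    modprobe nvme-rdma

    # First IPv4 address on an interface, as done with ip/awk/cut above:
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9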
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3146115
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3146115
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3146115 ']'
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:15.849   13:33:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.849  [2024-12-14 13:33:15.579899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:15.849  [2024-12-14 13:33:15.580009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:16.109  [2024-12-14 13:33:15.717035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:16.109  [2024-12-14 13:33:15.820460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:16.109  [2024-12-14 13:33:15.820515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:16.109  [2024-12-14 13:33:15.820532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:16.109  [2024-12-14 13:33:15.820551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:16.109  [2024-12-14 13:33:15.820564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:16.109  [2024-12-14 13:33:15.823047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:16.109  [2024-12-14 13:33:15.823114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:16.109  [2024-12-14 13:33:15.823121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:16.677   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:16.677   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:07:16.677   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:16.677   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:16.677   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:16.936   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
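nvmfappstart has now launched the target (pid 3146115) and waitforlisten has confirmed it is serving RPCs on /var/tmp/spdk.sock. Roughly what that wait amounts to, as a sketch rather than the verbatim autotest_common.sh helper:

    # Poll until the target answers RPCs or dies (sketch):
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1          # process gone
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }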
00:07:16.936   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
00:07:16.936   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.936   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:16.936  [2024-12-14 13:33:16.474903] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f63eb7bd940) succeed.
00:07:16.936  [2024-12-14 13:33:16.492804] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f63eb779940) succeed.
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195  Malloc0
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195  Delay0
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195  [2024-12-14 13:33:16.817546] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
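The rpc_cmd calls above assemble the whole target: an RDMA transport, a 64 MiB malloc bdev with 4096-byte blocks wrapped in a delay bdev, a subsystem carrying that namespace, and data plus discovery listeners on 192.168.100.8:4420. Spelled out against the stock RPC client (assuming a standard SPDK checkout where rpc_cmd forwards to scripts/rpc.py), the same sequence is:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420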
00:07:17.195   13:33:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:07:17.455  [2024-12-14 13:33:16.967026] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:07:19.992  Initializing NVMe Controllers
00:07:19.992  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:07:19.992  controller IO queue size 128 less than required
00:07:19.992  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:07:19.992  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:07:19.992  Initialization complete. Launching workers.
00:07:19.992  NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37395
00:07:19.992  CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37456, failed to submit 62
00:07:19.992  	 success 37398, unsuccessful 58, failed 0
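A consistency check on the summary above: the 37,456 aborts submitted plus the 62 that failed to submit match the 37,518 I/Os the namespace reports (123 completed + 37,395 failed, i.e. one abort attempt per I/O), and of the submitted aborts 37,398 succeeded against 58 unsuccessful:

    # Pure arithmetic on the figures in the abort summary:
    awk 'BEGIN {
        printf "abort attempts:     %d\n", 37456 + 62             # = 123 + 37395
        printf "abort success rate: %.2f%%\n", 100 * 37398 / 37456
    }'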
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:19.992  rmmod nvme_rdma
00:07:19.992  rmmod nvme_fabrics
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3146115 ']'
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3146115
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3146115 ']'
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3146115
00:07:19.992    13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:19.992    13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146115
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146115'
00:07:19.992  killing process with pid 3146115
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3146115
00:07:19.992   13:33:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3146115
00:07:21.372   13:33:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:21.372   13:33:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:07:21.372  
00:07:21.372  real	0m12.462s
00:07:21.372  user	0m18.699s
00:07:21.372  sys	0m5.874s
00:07:21.372   13:33:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.372   13:33:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:21.372  ************************************
00:07:21.372  END TEST nvmf_abort
00:07:21.372  ************************************
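The START/END banners and the real/user/sys trio bracketing each test come from the run_test wrapper, which times the test script it is handed. An approximation of that wrapper (a sketch, not the verbatim autotest_common.sh implementation):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }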
00:07:21.372   13:33:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:07:21.372   13:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:21.372   13:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.372   13:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:21.372  ************************************
00:07:21.372  START TEST nvmf_ns_hotplug_stress
00:07:21.372  ************************************
00:07:21.372   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:07:21.632  * Looking for test storage...
00:07:21.632  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:21.632     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:07:21.632    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:21.633  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.633  		--rc genhtml_branch_coverage=1
00:07:21.633  		--rc genhtml_function_coverage=1
00:07:21.633  		--rc genhtml_legend=1
00:07:21.633  		--rc geninfo_all_blocks=1
00:07:21.633  		--rc geninfo_unexecuted_blocks=1
00:07:21.633  		
00:07:21.633  		'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:21.633  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.633  		--rc genhtml_branch_coverage=1
00:07:21.633  		--rc genhtml_function_coverage=1
00:07:21.633  		--rc genhtml_legend=1
00:07:21.633  		--rc geninfo_all_blocks=1
00:07:21.633  		--rc geninfo_unexecuted_blocks=1
00:07:21.633  		
00:07:21.633  		'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:21.633  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.633  		--rc genhtml_branch_coverage=1
00:07:21.633  		--rc genhtml_function_coverage=1
00:07:21.633  		--rc genhtml_legend=1
00:07:21.633  		--rc geninfo_all_blocks=1
00:07:21.633  		--rc geninfo_unexecuted_blocks=1
00:07:21.633  		
00:07:21.633  		'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:21.633  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.633  		--rc genhtml_branch_coverage=1
00:07:21.633  		--rc genhtml_function_coverage=1
00:07:21.633  		--rc genhtml_legend=1
00:07:21.633  		--rc geninfo_all_blocks=1
00:07:21.633  		--rc geninfo_unexecuted_blocks=1
00:07:21.633  		
00:07:21.633  		'
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:21.633     13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:21.633      13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.633      13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.633      13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.633      13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:07:21.633      13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:21.633  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
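The "integer expression expected" message above is a genuine shell error captured by the trace: build_nvmf_app_args reaches a numeric test with an empty operand, and [ '' -eq 1 ] is not a valid arithmetic comparison in bash. A minimal reproduction and a common defensive form (the variable name below is hypothetical; the actual one is not visible in this excerpt):

    flag=''
    [ "$flag" -eq 1 ]            # -> [: : integer expression expected, exit status 2
    [ "${flag:-0}" -eq 1 ]       # defaults the empty value to 0, so the test is valid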
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:21.633    13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:07:21.633   13:33:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:07:28.204  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:07:28.204  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:07:28.204   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:07:28.205  Found net devices under 0000:d9:00.0: mlx_0_0
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:07:28.205  Found net devices under 0000:d9:00.1: mlx_0_1
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
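The "Found net devices" lines come from mapping each PCI function to its netdev through sysfs. A condensed replay of the @411/@427 steps traced above:

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path -> mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"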
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:28.205     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:28.205     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:07:28.205  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:28.205      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:07:28.205      altname enp217s0f0np0
00:07:28.205      altname ens818f0np0
00:07:28.205      inet 192.168.100.8/24 scope global mlx_0_0
00:07:28.205         valid_lft forever preferred_lft forever
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:07:28.205  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:28.205      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:07:28.205      altname enp217s0f1np1
00:07:28.205      altname ens818f1np1
00:07:28.205      inet 192.168.100.9/24 scope global mlx_0_1
00:07:28.205         valid_lft forever preferred_lft forever
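allocate_nic_ips has now resolved both ports by shelling out through get_ip_address. A minimal reconstruction of that helper as exercised in the trace (nvmf/common.sh@116-117):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node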
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:07:28.205   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:07:28.205    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:07:28.205     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list
00:07:28.205     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:28.205     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:28.205      13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:28.205      13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:07:28.465     13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:28.465   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:07:28.465  192.168.100.9'
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:07:28.465  192.168.100.9'
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1
00:07:28.465   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:07:28.465    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:07:28.465  192.168.100.9'
00:07:28.466    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2
00:07:28.466    13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
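The two target IPs are peeled off the newline-separated RDMA_IP_LIST exactly as the @485/@486 pipelines show:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9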
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:07:28.466   13:33:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3150377
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3150377
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3150377 ']'
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:28.466  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:28.466   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:28.466  [2024-12-14 13:33:28.126324] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:28.466  [2024-12-14 13:33:28.126421] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:28.725  [2024-12-14 13:33:28.262241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:28.725  [2024-12-14 13:33:28.363922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:28.725  [2024-12-14 13:33:28.363982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:28.725  [2024-12-14 13:33:28.363999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:28.725  [2024-12-14 13:33:28.364015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:28.725  [2024-12-14 13:33:28.364044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:28.725  [2024-12-14 13:33:28.366534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:28.725  [2024-12-14 13:33:28.366609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:28.725  [2024-12-14 13:33:28.366612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:29.294   13:33:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:07:29.553  [2024-12-14 13:33:29.160267] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f2d665a4940) succeed.
00:07:29.553  [2024-12-14 13:33:29.169748] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f2d66560940) succeed.
00:07:29.812   13:33:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:30.071   13:33:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:07:30.071  [2024-12-14 13:33:29.753774] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:30.071   13:33:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:07:30.330   13:33:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:30.589  Malloc0
00:07:30.589   13:33:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:30.848  Delay0
00:07:30.848   13:33:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.107   13:33:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:31.107  NULL1
00:07:31.107   13:33:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
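At this point the target is fully provisioned. The RPC sequence above, condensed for reference (rpc.py abbreviates the full scripts/rpc.py path; all arguments are copied from the trace):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1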
00:07:31.366   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:31.366   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3150940
00:07:31.366   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:31.366   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.625   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.885   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:31.885   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:31.885  true
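Each following block of trace is one more iteration of the hotplug loop, reconstructed here from the @44-@50 xtrace lines (a sketch; the exact script text may differ, and rpc.py again abbreviates the full path):

    while kill -0 "$PERF_PID"; do                                        # @44: loop while spdk_nvme_perf runs
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove the namespace
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back
        null_size=$((null_size + 1))                                     # @49: increments from 1000
        rpc.py bdev_null_resize NULL1 "$null_size"                       # @50: prints "true" on success
    done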
00:07:31.885   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:31.885   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:32.144   13:33:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:32.404   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:32.404   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:32.663  true
00:07:32.663   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:32.663   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:32.663   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:32.922   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:32.922   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:33.181  true
00:07:33.181   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:33.181   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.440   13:33:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:33.440   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:33.440   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:33.699  true
00:07:33.699   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:33.699   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.958   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:34.217   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:34.217   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:34.217  true
00:07:34.217   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:34.217   13:33:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.476   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:34.736   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:34.736   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:34.736  true
00:07:34.736   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:34.736   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.995   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:35.254   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:35.254   13:33:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:35.513  true
00:07:35.513   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:35.513   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.513   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:35.771   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:35.771   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:36.030  true
00:07:36.030   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:36.030   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:36.343   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:36.343   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:36.343   13:33:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:36.602  true
00:07:36.602   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:36.602   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:36.861   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:36.861   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:36.861   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:37.120  true
00:07:37.120   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:37.120   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:37.379   13:33:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:37.638   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:37.638   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:37.638  true
00:07:37.638   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:37.638   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:37.897   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:38.156   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:38.156   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:38.156  true
00:07:38.415   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:38.415   13:33:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:38.415   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:38.674   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:38.674   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:38.933  true
00:07:38.933   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:38.933   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:38.933   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:39.192   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:39.192   13:33:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:39.450  true
00:07:39.450   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:39.450   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.709   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:39.968   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:39.968   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:39.968  true
00:07:39.968   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:39.968   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.226   13:33:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.485   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:40.485   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:40.485  true
00:07:40.743   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:40.743   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.743   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.002   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:41.002   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:41.262  true
00:07:41.262   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:41.262   13:33:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.520   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.520   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:41.520   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:41.779  true
00:07:41.779   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:41.779   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.038   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.297   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:42.297   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:42.297  true
00:07:42.297   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:42.297   13:33:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.556   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.815   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:42.815   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:43.073  true
00:07:43.073   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:43.073   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.073   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.332   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:07:43.332   13:33:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:07:43.591  true
00:07:43.591   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:43.591   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.850   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.850   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:43.850   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:44.109  true
00:07:44.109   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:44.109   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.368   13:33:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.627   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:07:44.627   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:44.627  true
00:07:44.887   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:44.887   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.887   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.146   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:45.146   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:07:45.405  true
00:07:45.405   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:45.405   13:33:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.405   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.664   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:45.664   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:45.923  true
00:07:45.923   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:45.923   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.182   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.441   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:46.441   13:33:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:46.441  true
00:07:46.441   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:46.441   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.700   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.959   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:46.959   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:47.218  true
00:07:47.218   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:47.218   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.218   13:33:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.477   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:47.477   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:47.736  true
00:07:47.736   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:47.736   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.995   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.995   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:47.996   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:48.255  true
00:07:48.255   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:48.255   13:33:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.514   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.773   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:48.773   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:48.773  true
00:07:48.773   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:48.773   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.032   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.290   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:07:49.290   13:33:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:07:49.549  true
00:07:49.549   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:49.549   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.549   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.808   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:07:49.808   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:07:50.067  true
00:07:50.067   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:50.067   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.327   13:33:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.585   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:07:50.585   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:07:50.585  true
00:07:50.585   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:50.585   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.843   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.101   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:07:51.101   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:07:51.101  true
00:07:51.360   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:51.360   13:33:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.360   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.619   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:07:51.619   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:07:51.879  true
00:07:51.879   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:51.879   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.138   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.397   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:07:52.397   13:33:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:07:52.397  true
00:07:52.397   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:52.397   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.656   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.915   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:07:52.915   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:07:52.915  true
00:07:53.175   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:53.175   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.175   13:33:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.434   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:07:53.434   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:07:53.694  true
00:07:53.694   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:53.694   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.953   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.953   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:07:53.953   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:07:54.213  true
00:07:54.213   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:54.213   13:33:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.472   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.730   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:07:54.730   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:07:54.730  true
00:07:54.990   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:54.990   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.990   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.249   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:07:55.249   13:33:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:07:55.507  true
00:07:55.507   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:55.507   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.765   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.765   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:07:55.765   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:07:56.024  true
00:07:56.024   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:56.024   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.283   13:33:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:56.541   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:07:56.541   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:07:56.541  true
00:07:56.541   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:56.542   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.801   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.060   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:07:57.060   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:07:57.319  true
00:07:57.319   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:57.319   13:33:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.319   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.578   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:07:57.578   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:07:57.838  true
00:07:57.838   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:57.838   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.097   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.356   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:07:58.356   13:33:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:07:58.356  true
00:07:58.356   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:58.356   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.614   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.881   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:58.881   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:58.881  true
00:07:59.167   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:59.167   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.167   13:33:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.454   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:59.455   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:59.455  true
00:07:59.714   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:07:59.714   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.714   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.973   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:07:59.973   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:08:00.231  true
00:08:00.231   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:00.231   13:33:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.490   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.490   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:08:00.490   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:08:00.749  true
00:08:00.749   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:00.749   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.008   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.267   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:08:01.267   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:08:01.267  true
00:08:01.267   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:01.267   13:34:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.526   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.784   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:08:01.784   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:08:02.043  true
00:08:02.043   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:02.043   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.302   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.302   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:08:02.302   13:34:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:02.561  true
00:08:02.561   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:02.561   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.561  Initializing NVMe Controllers
00:08:02.561  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:02.561  Controller IO queue size 128, less than required.
00:08:02.561  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:02.561  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:02.561  Initialization complete. Launching workers.
00:08:02.561  ========================================================
00:08:02.561                                                                                                                     Latency(us)
00:08:02.561  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:08:02.561  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   35182.90      17.18    3637.98    2050.96    5903.78
00:08:02.561  ========================================================
00:08:02.561  Total                                                                          :   35182.90      17.18    3637.98    2050.96    5903.78
00:08:02.561  
00:08:02.819   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:03.078   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:08:03.078   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:08:03.078  true
00:08:03.078   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3150940
00:08:03.078  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3150940) - No such process
00:08:03.078   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3150940
00:08:03.078   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.337   13:34:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
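
The trace above is the main hotplug loop of ns_hotplug_stress.sh: while the background I/O workload (PID 3150940) is alive, the script removes namespace 1 from cnode1, re-adds the Delay0 bdev as a namespace, and resizes the NULL1 bdev to a value that grows by one each pass (1017 through 1054 in this excerpt). When kill -0 reports "No such process", the loop ends, the script waits on the workload, and both namespaces are torn down. A minimal sketch of that loop, reconstructed from the traced script line numbers; perf_pid is an assumed variable name, and null_size continues from iterations before this excerpt:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
perf_pid=3150940
null_size=1016
while kill -0 "$perf_pid" 2>/dev/null; do                              # line 44: loop while workload runs
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
    null_size=$((null_size + 1))                                       # line 49
    "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # line 50
done
wait "$perf_pid"                                                       # line 53
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # line 54
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # line 55
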
00:08:03.596   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:03.596   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:03.596   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:03.596   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.596   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:03.855  null0
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:03.855  null1
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.855   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:04.113  null2
00:08:04.113   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.113   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.113   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:04.372  null3
00:08:04.372   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.372   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.372   13:34:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:04.631  null4
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:04.631  null5
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.631   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:04.890  null6
00:08:04.890   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:04.890   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:04.890   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:05.150  null7
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
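
Before the concurrent phase, the script creates eight null bdevs, null0 through null7 (traced script lines 58-60): nthreads is set to 8, the pids array is emptied, and one bdev_null_create is issued per worker slot. A sketch under the same assumptions as above, with the create arguments copied from the trace:

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    # size 100 and block size 4096, as shown in the traced RPC calls
    "$rpc_py" bdev_null_create "null$i" 100 4096
done
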
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3156977 3156978 3156980 3156982 3156984 3156986 3156988 3156990
00:08:05.150   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:05.151   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:05.151   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.151   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:05.410   13:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:05.669   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.929   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:06.189   13:34:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.448   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.449   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:06.708   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.709   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.968   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:06.969   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:06.969   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.228   13:34:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.488   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:07.747   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.007   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:08.267   13:34:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:08.526   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.786   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.045   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
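The ten passes traced above come from the hotplug loop at target/ns_hotplug_stress.sh lines 16-18. A minimal sketch of that loop, reconstructed from the @16/@17/@18 markers: the backgrounded rpc.py calls and the wait between the add and remove phases are inferred from the shuffled per-pass ordering of the trace, and rpc_py/nqn/n are illustrative names, not necessarily the script's own.

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path taken from the trace
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                      # sh@16
            for n in {1..8}; do                                           # NSID n backs bdev null(n-1)
                    "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # sh@17
            done
            wait
            for n in {1..8}; do
                    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &      # sh@18
            done
            wait
    done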
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:09.305  rmmod nvme_rdma
00:08:09.305  rmmod nvme_fabrics
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3150377 ']'
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3150377
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3150377 ']'
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3150377
00:08:09.305    13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:08:09.305   13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:09.305    13:34:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3150377
00:08:09.305   13:34:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:09.305   13:34:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:09.305   13:34:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3150377'
00:08:09.305  killing process with pid 3150377
00:08:09.305   13:34:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3150377
00:08:09.305   13:34:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3150377
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
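nvmftestfini then tears the target down: nvmfcleanup retries unloading the RDMA fabrics modules (nvmf/common.sh @121-@129) and killprocess stops the reactor with pid 3150377 (autotest_common.sh @954-@978). A sketch of that teardown, assuming the retry delay, the transport variable name, and the sudo special-casing that this trace does not show:

    nvmfcleanup() {
            sync
            if [[ $TEST_TRANSPORT == rdma ]]; then            # assumption: variable name
                    set +e                                    # unload may fail while references drain
                    for i in {1..20}; do
                            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
                            sleep 1                           # assumption: delay between retries
                    done
                    set -e
            fi
    }

    killprocess() {
            local pid=$1 name
            [[ -n $pid ]] || return 1
            kill -0 "$pid" || return 0                        # already gone
            if [[ $(uname) == Linux ]]; then
                    name=$(ps --no-headers -o comm= "$pid")   # @960; sudo-owned daemons get extra handling (not shown)
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                       # @978: block until the reactor exits
    }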
00:08:11.211  
00:08:11.211  real	0m49.618s
00:08:11.211  user	3m33.172s
00:08:11.211  sys	0m16.775s
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:11.211  ************************************
00:08:11.211  END TEST nvmf_ns_hotplug_stress
00:08:11.211  ************************************
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:11.211  ************************************
00:08:11.211  START TEST nvmf_delete_subsystem
00:08:11.211  ************************************
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:08:11.211  * Looking for test storage...
00:08:11.211  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
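The block above is scripts/common.sh deciding whether the installed lcov predates version 2 ("lt 1.15 2"), so the harness can pick compatible coverage flags. Sketched from the @333-@368 markers; the operator table beyond '<' and the final equality return are assumptions, since this trace only exercises the '<' path.

    decimal() {                                               # @353-@355
            local d=$1
            [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0       # non-numeric fallback is assumed
    }

    cmp_versions() {
            local ver1 ver1_l ver2 ver2_l
            IFS=.-: read -ra ver1 <<< "$1"                    # split components on '.', '-', ':'
            IFS=.-: read -ra ver2 <<< "$3"
            local op=$2 lt=0 gt=0 eq=0 v
            ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
            case "$op" in
                    '<') lt=1 ;; '>') gt=1 ;; '=') eq=1 ;;    # other operators assumed analogous
            esac
            for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
                    ver1[v]=$(decimal "${ver1[v]:-0}")        # missing components pad to 0
                    ver2[v]=$(decimal "${ver2[v]:-0}")
                    (( ver1[v] > ver2[v] )) && return $((!gt))
                    (( ver1[v] < ver2[v] )) && return $((!lt))
            done
            return $((!eq))
    }

    lt() { cmp_versions "$1" '<' "$2"; }                      # @373; 'lt 1.15 2' returns 0, as traced

With lt=1, the first component where ver1 is smaller (15-vs-nothing never runs; 1 < 2 at v=0) yields return 0, matching the "return 0" at the end of the trace.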
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:11.211  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.211  		--rc genhtml_branch_coverage=1
00:08:11.211  		--rc genhtml_function_coverage=1
00:08:11.211  		--rc genhtml_legend=1
00:08:11.211  		--rc geninfo_all_blocks=1
00:08:11.211  		--rc geninfo_unexecuted_blocks=1
00:08:11.211  		
00:08:11.211  		'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:11.211  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.211  		--rc genhtml_branch_coverage=1
00:08:11.211  		--rc genhtml_function_coverage=1
00:08:11.211  		--rc genhtml_legend=1
00:08:11.211  		--rc geninfo_all_blocks=1
00:08:11.211  		--rc geninfo_unexecuted_blocks=1
00:08:11.211  		
00:08:11.211  		'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:11.211  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.211  		--rc genhtml_branch_coverage=1
00:08:11.211  		--rc genhtml_function_coverage=1
00:08:11.211  		--rc genhtml_legend=1
00:08:11.211  		--rc geninfo_all_blocks=1
00:08:11.211  		--rc geninfo_unexecuted_blocks=1
00:08:11.211  		
00:08:11.211  		'
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:11.211  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.211  		--rc genhtml_branch_coverage=1
00:08:11.211  		--rc genhtml_function_coverage=1
00:08:11.211  		--rc genhtml_legend=1
00:08:11.211  		--rc geninfo_all_blocks=1
00:08:11.211  		--rc geninfo_unexecuted_blocks=1
00:08:11.211  		
00:08:11.211  		'
00:08:11.211   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:11.211    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:11.211     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
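The block above is nvmf/common.sh publishing its test-wide defaults; the NVME_HOST array and NVME_CONNECT string are meant to be spliced into a connect call later. A sketch of that combination (hypothetical invocation; the live tests substitute the target IP discovered further down in this trace):

    $NVME_CONNECT -t rdma -a 192.168.100.8 -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"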
00:08:11.212    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:08:11.212     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:08:11.212     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:11.212     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:11.212     13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:11.471      13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.471      13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.471      13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:11.471      13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:08:11.471      13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
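The PATH printed above carries the same /opt/golangci, /opt/protoc, and /opt/go prefixes six times over because paths/export.sh is re-sourced by every nested suite. Harmless, but a dedupe pass (a sketch only, not something the harness does) is a one-liner:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # drop the trailing colon left by awk's ORS
    export PATH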
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:11.471  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
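The shell complaint at nvmf/common.sh line 33 is '[' '' -eq 1 ']': an unset variable handed to an integer test. It is benign here, since the test simply fails and the branch is skipped, but the defensive idiom is to default the expansion first; a sketch with a hypothetical variable name:

    # ${flag:-0} substitutes 0 when flag is unset or empty, so the numeric
    # test always sees an integer ("flag" is illustrative, not from common.sh).
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi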
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:11.471    13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:08:11.471   13:34:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
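Each pci_bus_cache lookup above maps one vendor:device pair to the PCI addresses present on the host. A standalone equivalent for the pair this box matches (0x15b3:0x1015, per the "Found" lines just below):

    lspci -Dn -d 15b3:1015 | awk '{print $1}'
    # -> 0000:d9:00.0
    #    0000:d9:00.1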
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:08:18.040  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:18.040   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:08:18.041  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:08:18.041  Found net devices under 0000:d9:00.0: mlx_0_0
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:08:18.041  Found net devices under 0000:d9:00.1: mlx_0_1
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
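The @411/@427 pair resolves a PCI function to its netdev name through sysfs and then strips the path prefix. Standalone, with the address from this log:

    pci=0000:d9:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "${dev##*/}"   # prefix strip, as @427 does with ##*/
    done
    # -> mlx_0_0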
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm
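load_ib_rdma_modules is just the seven modprobes above; condensed for reference (same module set, same order):

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done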
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
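The three @117 lines are a single pipeline that xtrace prints one stage at a time; joined back together, get_ip_address is:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # -> 192.168.100.8 (the value assigned at @78 on the next line)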
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:18.041  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:18.041      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:08:18.041      altname enp217s0f0np0
00:08:18.041      altname ens818f0np0
00:08:18.041      inet 192.168.100.8/24 scope global mlx_0_0
00:08:18.041         valid_lft forever preferred_lft forever
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:18.041  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:18.041      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:08:18.041      altname enp217s0f1np1
00:08:18.041      altname ens818f1np1
00:08:18.041      inet 192.168.100.9/24 scope global mlx_0_1
00:08:18.041         valid_lft forever preferred_lft forever
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:18.041   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:18.041    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:18.041      13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:18.041      13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:08:18.041     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:18.042     13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:18.042  192.168.100.9'
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:18.042  192.168.100.9'
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:18.042  192.168.100.9'
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1
00:08:18.042    13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
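@485 and @486 peel the first and second addresses off the newline-separated list with head/tail; reconstructed:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)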
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:18.042   13:34:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3161374
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3161374
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3161374 ']'
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.042  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:18.042   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
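waitforlisten blocks until the freshly forked nvmf_tgt (PID 3161374) answers on /var/tmp/spdk.sock. A minimal stand-in (a sketch only: the real helper in common/autotest_common.sh also retries RPCs, and its loop bound differs):

    pid=3161374 sock=/var/tmp/spdk.sock
    for _ in $(seq 100); do                      # ~10 s at 0.1 s per probe
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        [ -S "$sock" ] && break                  # RPC socket is up
        sleep 0.1
    done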
00:08:18.042  [2024-12-14 13:34:17.125036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:18.042  [2024-12-14 13:34:17.125137] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:18.042  [2024-12-14 13:34:17.262245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:18.042  [2024-12-14 13:34:17.361707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:18.042  [2024-12-14 13:34:17.361757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:18.042  [2024-12-14 13:34:17.361770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:18.042  [2024-12-14 13:34:17.361784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:18.042  [2024-12-14 13:34:17.361794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:18.042  [2024-12-14 13:34:17.363794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.042  [2024-12-14 13:34:17.363799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:18.301   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.302   13:34:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.302  [2024-12-14 13:34:17.992222] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f87fdd92940) succeed.
00:08:18.302  [2024-12-14 13:34:18.001399] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f87fdd4e940) succeed.
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.561  [2024-12-14 13:34:18.162405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.561  NULL1
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.561  Delay0
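rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock. The Delay0 creation above injects a fixed latency on every I/O: -r/-t are the average/p99 read latencies and -w/-n the average/p99 write latencies, all in microseconds, so 1000000 means a full second per I/O; that is what drives the ~1.6 s averages in the perf summary further down. Spelled out:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_delay_create \
        -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000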
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3161657
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:18.561   13:34:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:18.820  [2024-12-14 13:34:18.324429] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
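For reference, the spdk_nvme_perf flags used at @26, glossed (same command, reflowed; the -P reading follows perf's --num-qpairs option):

    # -c 0xC: core mask, cores 2 and 3 (hence "lcore 2"/"lcore 3" below)
    # -t 5: run 5 s; -q 128: queue depth 128; -o 512: 512-byte I/O
    # -w randrw -M 70: mixed random workload, 70% reads; -P 4: I/O qpairs
    spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4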
00:08:20.726   13:34:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:20.726   13:34:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.726   13:34:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:21.663  NVMe io qpair process completion error
00:08:21.663  NVMe io qpair process completion error
00:08:21.922  NVMe io qpair process completion error
00:08:21.922  NVMe io qpair process completion error
00:08:21.922  NVMe io qpair process completion error
00:08:21.922  NVMe io qpair process completion error
00:08:21.922   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.922   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:21.922   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3161657
00:08:21.922   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:22.491   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:22.491   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3161657
00:08:22.491   13:34:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
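@34-@38 are a bounded poll on the perf process: kill -0 delivers no signal, it only reports whether the PID is still alive. The loop's shape, reconstructed as a sketch of delete_subsystem.sh:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        (( delay++ > 30 )) && exit 1            # give up after ~15 s
        sleep 0.5
    done

The completion-error flood below is the point of the test, not a failure of it: nvmf_delete_subsystem tears down the subsystem's submission queues while perf still has commands in flight, so each one completes with sct=0/sc=8 (command aborted due to SQ deletion) and perf reports -6 (ENXIO) when it cannot resubmit.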
00:08:22.751  Read completed with error (sct=0, sc=8)
00:08:22.751  starting I/O failed: -6
00:08:22.751  Write completed with error (sct=0, sc=8)
00:08:22.751  starting I/O failed: -6
[~350 near-identical lines condensed: two bursts of alternating "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs (~190 lines) and two bursts of bare "Read/Write completed with error (sct=0, sc=8)" completions (~160 lines), timestamps 00:08:22.751 through 00:08:22.753]
00:08:22.753   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:22.753   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3161657
00:08:22.753   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:22.753  Initializing NVMe Controllers
00:08:22.753  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:22.753  Controller IO queue size 128, less than required.
00:08:22.753  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:22.753  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:22.753  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:22.753  Initialization complete. Launching workers.
00:08:22.753  ========================================================
00:08:22.753  Device Information                                                             :       IOPS      MiB/s Average(us)    min(us)    max(us)
00:08:22.753  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:      80.46       0.04 1594103.80 1000236.74 2976331.87
00:08:22.753  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:      80.46       0.04 1596096.17 1001673.97 2978641.12
00:08:22.753  ========================================================
00:08:22.753  Total                                                                          :     160.92       0.08 1595099.98 1000236.74 2978641.12
00:08:22.753  
00:08:22.753  [2024-12-14 13:34:22.463087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:08:22.753  [2024-12-14 13:34:22.463166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:08:22.753  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
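The run of "completed with error (sct=0, sc=8)" completions above, and perf's closing "errors occurred", are the point of this test rather than a failure: delete_subsystem.sh tears the subsystem down while spdk_nvme_perf still has I/O in flight, and with Status Code Type 0 (generic command status) code 0x08 decodes to "Command Aborted due to SQ Deletion". A minimal decoder for reading these lines (the table is abbreviated, not the full NVMe status-code list):

    # Abbreviated SCT=0 (generic command status) decoder.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct:$sc" in
            0:0) echo "Successful Completion" ;;
            0:4) echo "Data Transfer Error" ;;
            0:7) echo "Command Abort Requested" ;;
            0:8) echo "Command Aborted due to SQ Deletion" ;;
            *)   echo "unknown (sct=$sct, sc=$sc)" ;;
        esac
    }
    decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion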
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3161657
00:08:23.322  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3161657) - No such process
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3161657
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3161657
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:23.322    13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3161657
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
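This NOT/valid_exec_arg block is autotest_common.sh asserting that a command fails: the perf process has already been reaped, so `wait 3161657` must return non-zero for the test to continue. Stripped of the es bookkeeping, the pattern is roughly the following (a simplified sketch, not the exact SPDK helper):

    # NOT: succeed only when the wrapped command fails.
    NOT() {
        if "$@" 2>/dev/null; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # command failed, which is what the caller wanted
    }
    NOT wait 3161657 && echo "perf pid is gone, as expected"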
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:23.322  [2024-12-14 13:34:22.957035] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
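rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py talking to the target's RPC socket, so the three calls above amount to rebuilding the subsystem that the first half of the test deleted mid-I/O:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0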
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3162474
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:23.322   13:34:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
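With the subsystem rebuilt, the test launches spdk_nvme_perf again in the background (-q 128 queue depth, -w randrw -M 70 for a 70/30 read/write mix, -o 512 byte I/O, -t 3 seconds) and enters the same bounded liveness poll seen earlier: `kill -0 $pid` succeeds while the process exists, and the `(( delay++ > 20 ))` guard caps the wait at roughly twenty half-second probes. Reconstructed from the trace, with error handling condensed:

    perf_pid=3162474
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done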
00:08:23.581  [2024-12-14 13:34:23.090276] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:08:23.841   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:23.841   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:23.841   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:24.409   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:24.409   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:24.409   13:34:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:24.977   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:24.977   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:24.977   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:25.546   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:25.546   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:25.546   13:34:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:25.805   13:34:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:25.805   13:34:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:25.805   13:34:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:26.372   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:26.372   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:26.372   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:26.940   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:26.940   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:26.940   13:34:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:27.559   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:27.559   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:27.559   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:27.819   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:27.819   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:27.819   13:34:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:28.387   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:28.387   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:28.387   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:28.955   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:28.955   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:28.955   13:34:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:29.522   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:29.522   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:29.522   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:30.090   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.090   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:30.090   13:34:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:30.349   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.349   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:30.349   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:30.608  Initializing NVMe Controllers
00:08:30.608  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:30.608  Controller IO queue size 128, less than required.
00:08:30.608  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:30.608  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:30.608  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:30.608  Initialization complete. Launching workers.
00:08:30.608  ========================================================
00:08:30.608  Device Information                                                             :       IOPS      MiB/s Average(us)    min(us)    max(us)
00:08:30.608  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1001466.72 1000060.44 1004386.61
00:08:30.608  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1003010.10 1000613.64 1006871.82
00:08:30.608  ========================================================
00:08:30.608  Total                                                                          :     256.00       0.12 1002238.41 1000060.44 1006871.82
00:08:30.608  
00:08:30.867   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:30.867   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162474
00:08:30.867  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3162474) - No such process
00:08:30.867   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3162474
00:08:30.867   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:30.867   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:30.868  rmmod nvme_rdma
00:08:30.868  rmmod nvme_fabrics
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
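Module teardown in nvmfcleanup is deliberately tolerant: with `set -e` suspended, it retries `modprobe -r` until nvme-rdma actually unloads (here it succeeded on the first pass, as the rmmod lines show). The shape of the loop, per the trace (the back-off between retries is an assumption, not visible here):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1   # assumed back-off between retries
    done
    modprobe -v -r nvme-fabrics
    set -e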
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3161374 ']'
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3161374
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3161374 ']'
00:08:30.868   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3161374
00:08:31.127    13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:31.127    13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161374
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161374'
00:08:31.127  killing process with pid 3161374
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3161374
00:08:31.127   13:34:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3161374
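killprocess layers several safety checks before signalling: the PID must still be alive (`kill -0`), and `ps -o comm=` must confirm it is still the SPDK reactor rather than a recycled PID, before the kill and the reaping wait. A condensed sketch of that logic:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1           # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap; exit code ignored
    }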
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:32.506  
00:08:32.506  real	0m21.311s
00:08:32.506  user	0m51.999s
00:08:32.506  sys	0m6.163s
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:32.506  ************************************
00:08:32.506  END TEST nvmf_delete_subsystem
00:08:32.506  ************************************
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:32.506  ************************************
00:08:32.506  START TEST nvmf_host_management
00:08:32.506  ************************************
00:08:32.506   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:08:32.506  * Looking for test storage...
00:08:32.506  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:08:32.506    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:32.506     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:08:32.506     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
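The burst of scripts/common.sh trace above is cmp_versions checking whether the installed lcov predates 1.15: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with missing fields treated as zero. The same idea, compacted (numeric fields only):

    version_lt() {   # version_lt 1.14 1.15 -> true; version_lt 1.15 1.15 -> false
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal
    }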
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:32.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:32.766  		--rc genhtml_branch_coverage=1
00:08:32.766  		--rc genhtml_function_coverage=1
00:08:32.766  		--rc genhtml_legend=1
00:08:32.766  		--rc geninfo_all_blocks=1
00:08:32.766  		--rc geninfo_unexecuted_blocks=1
00:08:32.766  		
00:08:32.766  		'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:32.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:32.766  		--rc genhtml_branch_coverage=1
00:08:32.766  		--rc genhtml_function_coverage=1
00:08:32.766  		--rc genhtml_legend=1
00:08:32.766  		--rc geninfo_all_blocks=1
00:08:32.766  		--rc geninfo_unexecuted_blocks=1
00:08:32.766  		
00:08:32.766  		'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:32.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:32.766  		--rc genhtml_branch_coverage=1
00:08:32.766  		--rc genhtml_function_coverage=1
00:08:32.766  		--rc genhtml_legend=1
00:08:32.766  		--rc geninfo_all_blocks=1
00:08:32.766  		--rc geninfo_unexecuted_blocks=1
00:08:32.766  		
00:08:32.766  		'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:32.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:32.766  		--rc genhtml_branch_coverage=1
00:08:32.766  		--rc genhtml_function_coverage=1
00:08:32.766  		--rc genhtml_legend=1
00:08:32.766  		--rc geninfo_all_blocks=1
00:08:32.766  		--rc geninfo_unexecuted_blocks=1
00:08:32.766  		
00:08:32.766  		'
00:08:32.766   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:32.766     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:32.766    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:08:32.767     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:08:32.767     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:32.767     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:32.767     13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:32.767      13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:32.767      13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:32.767      13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:32.767      13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:08:32.767      13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
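Note the PATH being exported: paths/export.sh prepends the golangci/protoc/go directories unconditionally, and because it is sourced once per nested test script the same entries pile up six times over. Harmless but noisy; a membership check is the usual fix (a sketch, not the current export.sh):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin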
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:32.767  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
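The "integer expression expected" message is a real, if harmless, bug surfacing in nvmf/common.sh: line 33 runs an arithmetic `[ ... -eq 1 ]` test on a flag variable that is empty in this environment, so test(1) errors out and the branch is simply skipped. Defaulting the expansion is the standard repair (FLAG is a stand-in name; the real variable is not visible in this trace):

    # failing form:   [ "$FLAG" -eq 1 ] && NVMF_APP+=(<extra args>)
    # guarded form, treating unset/empty as 0:
    [ "${FLAG:-0}" -eq 1 ] && NVMF_APP+=(--extra-arg)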
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:32.767    13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:08:32.767   13:34:32 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:08:39.328  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:39.328   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:08:39.329  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:08:39.329  Found net devices under 0000:d9:00.0: mlx_0_0
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:08:39.329  Found net devices under 0000:d9:00.1: mlx_0_1
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
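Device discovery here is plain sysfs walking: for each Mellanox PCI function kept in pci_devs, the attached netdev name is read out of /sys/bus/pci/devices/<bdf>/net/. By hand, the same lookup on this node:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
        done
    done
    # -> Found net devices under 0000:d9:00.0: mlx_0_0
    # -> Found net devices under 0000:d9:00.1: mlx_0_1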
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:39.329  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:39.329      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:08:39.329      altname enp217s0f0np0
00:08:39.329      altname ens818f0np0
00:08:39.329      inet 192.168.100.8/24 scope global mlx_0_0
00:08:39.329         valid_lft forever preferred_lft forever
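allocate_nic_ips maps each RDMA netdev to its IPv4 address with a three-stage pipeline: `ip -o -4 addr show` prints one line per address, awk picks the fourth field (the CIDR address), and cut strips the prefix length. As a standalone helper:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node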
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:39.329  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:39.329      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:08:39.329      altname enp217s0f1np1
00:08:39.329      altname ens818f1np1
00:08:39.329      inet 192.168.100.9/24 scope global mlx_0_1
00:08:39.329         valid_lft forever preferred_lft forever
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:39.329   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:39.329    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:39.329      13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:39.329      13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:39.329     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:39.330     13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:39.330  192.168.100.9'
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:39.330  192.168.100.9'
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:39.330  192.168.100.9'
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1
00:08:39.330    13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
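The block above is the address-discovery step: for each RDMA interface returned by get_rdma_if_list, the first IPv4 address is extracted, and the first and second results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal sketch of the helper, reconstructed from the nvmf/common.sh@116-117 trace above:

  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
      # so awk selects it and cut strips the prefix length,
      # e.g. 192.168.100.8/24 -> 192.168.100.8.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }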
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3167254
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3167254
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3167254 ']'
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:39.330  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:39.330   13:34:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:39.330  [2024-12-14 13:34:38.650268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:39.330  [2024-12-14 13:34:38.650365] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:39.330  [2024-12-14 13:34:38.784996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:39.330  [2024-12-14 13:34:38.891441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:39.330  [2024-12-14 13:34:38.891495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:39.330  [2024-12-14 13:34:38.891514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:39.330  [2024-12-14 13:34:38.891534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:39.330  [2024-12-14 13:34:38.891548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:39.330  [2024-12-14 13:34:38.896964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:39.330  [2024-12-14 13:34:38.896991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:08:39.330  [2024-12-14 13:34:38.897077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:39.330  [2024-12-14 13:34:38.897097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
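nvmfappstart (nvmf/common.sh@507-511, traced above) reduces to launching the target application and blocking until its RPC socket answers. A condensed sketch; waitforlisten's retry loop lives in autotest_common.sh and is only hinted at here by the max_retries=100 trace:

  "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E: reactors on cores 1-4; -e 0xFFFF: all tracepoint groups
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the app accepts RPCs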
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.899   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:39.899  [2024-12-14 13:34:39.541181] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7fa6cad84940) succeed.
00:08:39.899  [2024-12-14 13:34:39.551295] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7fa6cad3f940) succeed.
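The two mlx5 IB devices above are claimed when the RDMA transport is created at target/host_management.sh@18. rpc_cmd is a thin wrapper around SPDK's RPC client, so the equivalent standalone call (against the default /var/tmp/spdk.sock) would be:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u: I/O unit size in bytes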
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.158   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:40.158  Malloc0
00:08:40.418  [2024-12-14 13:34:39.915249] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3167566
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3167566 /var/tmp/bdevperf.sock
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3167566 ']'
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:40.418  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:40.418  {
00:08:40.418    "params": {
00:08:40.418      "name": "Nvme$subsystem",
00:08:40.418      "trtype": "$TEST_TRANSPORT",
00:08:40.418      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:40.418      "adrfam": "ipv4",
00:08:40.418      "trsvcid": "$NVMF_PORT",
00:08:40.418      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:40.418      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:40.418      "hdgst": ${hdgst:-false},
00:08:40.418      "ddgst": ${ddgst:-false}
00:08:40.418    },
00:08:40.418    "method": "bdev_nvme_attach_controller"
00:08:40.418  }
00:08:40.418  EOF
00:08:40.418  )")
00:08:40.418   13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:40.418     13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:40.418    13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:40.418     13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:40.418     13:34:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:40.418    "params": {
00:08:40.418      "name": "Nvme0",
00:08:40.418      "trtype": "rdma",
00:08:40.418      "traddr": "192.168.100.8",
00:08:40.418      "adrfam": "ipv4",
00:08:40.418      "trsvcid": "4420",
00:08:40.418      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:40.418      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:40.418      "hdgst": false,
00:08:40.418      "ddgst": false
00:08:40.418    },
00:08:40.418    "method": "bdev_nvme_attach_controller"
00:08:40.418  }'
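gen_nvmf_target_json (nvmf/common.sh@560-586) assembled the attach-controller parameters printed above into a single JSON config. target/host_management.sh@72 feeds it to the initiator through process substitution, which is why bdevperf reads its config from /dev/fd/63:

  "$rootdir"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10   # queue depth 64, 64 KiB I/Os, verify workload, 10 s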
00:08:40.418  [2024-12-14 13:34:40.058783] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:40.418  [2024-12-14 13:34:40.058876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167566 ]
00:08:40.678  [2024-12-14 13:34:40.193672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.678  [2024-12-14 13:34:40.297989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.246  Running I/O for 10 seconds...
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:08:41.246    13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:08:41.246    13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:08:41.246    13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.246    13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:41.246    13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=564
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 564 -ge 100 ']'
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
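waitforio (target/host_management.sh@45-64, traced line by line above) is a bounded poll: up to ten times it asks the bdevperf app for Nvme0n1's iostat and succeeds once at least 100 reads have completed; here the first query already returned 564. Reassembled from the trace, with the argument checks kept and numeric jq output assumed:

  waitforio() {
      local rpc_sock=$1 bdev_name=$2
      [ -z "$rpc_sock" ] && return 1
      [ -z "$bdev_name" ] && return 1
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev_name" \
              | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
      done
      return $ret
  }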
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.246   13:34:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
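The remove_host call above revokes host0's permission to reach cnode0 while bdevperf still has 64 commands outstanding. The target reacts by disconnecting the initiator's I/O queue pair, so every in-flight command completes with ABORTED - SQ DELETION (the "(00/08)" in the dump below: NVMe status code type 0h, status code 08h, Command Aborted due to SQ Deletion). The dump that follows is the host printing each aborted command and its completion before resetting the controller.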
00:08:42.444        640.00 IOPS,    40.00 MiB/s
00:08:42.444  [2024-12-14 13:34:41.966302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf240 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf180 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf0c0 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f000 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8ef40 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7ee80 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6edc0 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5ed00 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4ec40 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3eb80 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2eac0 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1ea00 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0e940 len:0x10000 key:0x181800
00:08:42.444  [2024-12-14 13:34:41.966716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000deffc0 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff00 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dcfe40 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dbfd80 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dafcc0 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d9fc00 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d8fb40 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d7fa80 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d6f9c0 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d5f900 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.966981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.966995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d4f840 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d3f780 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d2f6c0 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f600 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d0f540 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff480 len:0x10000 key:0x181b00
00:08:42.444  [2024-12-14 13:34:41.967134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.444  [2024-12-14 13:34:41.967149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cef3c0 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cdf300 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf240 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf180 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf0c0 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f000 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8ef40 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7ee80 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6edc0 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5ed00 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4ec40 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3eb80 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2eac0 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1ea00 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0e940 len:0x10000 key:0x181b00
00:08:42.445  [2024-12-14 13:34:41.967523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000feffc0 len:0x10000 key:0x181f00
00:08:42.445  [2024-12-14 13:34:41.967549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff00 len:0x10000 key:0x181f00
00:08:42.445  [2024-12-14 13:34:41.967575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000beffc0 len:0x10000 key:0x181800
00:08:42.445  [2024-12-14 13:34:41.967600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c420000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c441000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c462000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c483000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a4000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c5000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e6000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c507000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c528000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c549000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c56a000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c58b000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ac000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5cd000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.967999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdf0000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.968011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.968026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ff000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.968037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.968051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fe000 len:0x10000 key:0x182900
00:08:42.445  [2024-12-14 13:34:41.968063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:42.445  [2024-12-14 13:34:41.971276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:42.445   13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3167566
00:08:42.445   13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:42.445   13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:42.445    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:42.445    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:42.445    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:42.445    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:42.446    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:42.446  {
00:08:42.446    "params": {
00:08:42.446      "name": "Nvme$subsystem",
00:08:42.446      "trtype": "$TEST_TRANSPORT",
00:08:42.446      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:42.446      "adrfam": "ipv4",
00:08:42.446      "trsvcid": "$NVMF_PORT",
00:08:42.446      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:42.446      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:42.446      "hdgst": ${hdgst:-false},
00:08:42.446      "ddgst": ${ddgst:-false}
00:08:42.446    },
00:08:42.446    "method": "bdev_nvme_attach_controller"
00:08:42.446  }
00:08:42.446  EOF
00:08:42.446  )")
00:08:42.446     13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:42.446    13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:42.446     13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:42.446     13:34:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:42.446    "params": {
00:08:42.446      "name": "Nvme0",
00:08:42.446      "trtype": "rdma",
00:08:42.446      "traddr": "192.168.100.8",
00:08:42.446      "adrfam": "ipv4",
00:08:42.446      "trsvcid": "4420",
00:08:42.446      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:42.446      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:42.446      "hdgst": false,
00:08:42.446      "ddgst": false
00:08:42.446    },
00:08:42.446    "method": "bdev_nvme_attach_controller"
00:08:42.446  }'
00:08:42.446  [2024-12-14 13:34:42.061601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:42.446  [2024-12-14 13:34:42.061694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167898 ]
00:08:42.705  [2024-12-14 13:34:42.197582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.705  [2024-12-14 13:34:42.308016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.274  Running I/O for 1 second...
00:08:44.211       2688.00 IOPS,   168.00 MiB/s
00:08:44.211                                                                                                  Latency(us)
00:08:44.211  
00:08:44.211  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:44.211  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:44.211  	 Verification LBA range: start 0x0 length 0x400
00:08:44.211  	 Nvme0n1             :       1.02    2739.62     171.23       0.00     0.00   22872.47    1179.65   46556.77
00:08:44.211  
00:08:44.211  ===================================================================================================================
00:08:44.211  
00:08:44.211  Total                       :               2739.62     171.23       0.00     0.00   22872.47    1179.65   46556.77
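The Total row is self-consistent with the 64 KiB I/O size the job ran with: IOPS times I/O size should reproduce the MiB/s column, which a one-liner confirms:

  awk 'BEGIN { printf "%.2f MiB/s\n", 2739.62 * 65536 / (1024 * 1024) }'   # prints 171.23 MiB/s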
00:08:45.150  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3167566 Killed                  $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:45.150  rmmod nvme_rdma
00:08:45.150  rmmod nvme_fabrics
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
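nvmfcleanup (nvmf/common.sh@121-129, traced above) syncs and then unloads the host-side NVMe modules, tolerating failures because nvme-rdma cannot be removed while connections are still draining; note that rmmod pulled out nvme_fabrics along with nvme_rdma on the first try here. A condensed sketch, with the back-off between attempts assumed:

  sync
  set +e   # modprobe -r is allowed to fail while module references remain
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1   # assumed pacing; the trace above succeeded on the first attempt
  done
  set -e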
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3167254 ']'
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3167254
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3167254 ']'
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3167254
00:08:45.150    13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45.150    13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167254
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167254'
00:08:45.150  killing process with pid 3167254
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3167254
00:08:45.150   13:34:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3167254
00:08:47.055  [2024-12-14 13:34:46.547518] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:08:47.055  
00:08:47.055  real	0m14.502s
00:08:47.055  user	0m35.201s
00:08:47.055  sys	0m6.219s
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:47.055  ************************************
00:08:47.055  END TEST nvmf_host_management
00:08:47.055  ************************************
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:47.055  ************************************
00:08:47.055  START TEST nvmf_lvol
00:08:47.055  ************************************
00:08:47.055   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:08:47.315  * Looking for test storage...
00:08:47.315  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:47.315     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
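The lt helper traced above (scripts/common.sh@333-368) decides whether the installed lcov predates 2.x so the matching LCOV_OPTS can be exported below. A simplified sketch of the comparison (the real cmp_versions also validates each field through decimal(); purely numeric fields are assumed here):

  version_lt() {
      local IFS=.-:   # split version strings on '.', '-' or ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < n; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1.15 < 2 decides on the first field
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

Here version_lt 1.15 2 returns 0, so the lcov 1.x option names are exported in the LCOV_OPTS block that follows.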
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:47.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.315  		--rc genhtml_branch_coverage=1
00:08:47.315  		--rc genhtml_function_coverage=1
00:08:47.315  		--rc genhtml_legend=1
00:08:47.315  		--rc geninfo_all_blocks=1
00:08:47.315  		--rc geninfo_unexecuted_blocks=1
00:08:47.315  		
00:08:47.315  		'
00:08:47.315    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:47.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.315  		--rc genhtml_branch_coverage=1
00:08:47.315  		--rc genhtml_function_coverage=1
00:08:47.316  		--rc genhtml_legend=1
00:08:47.316  		--rc geninfo_all_blocks=1
00:08:47.316  		--rc geninfo_unexecuted_blocks=1
00:08:47.316  		
00:08:47.316  		'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:47.316  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.316  		--rc genhtml_branch_coverage=1
00:08:47.316  		--rc genhtml_function_coverage=1
00:08:47.316  		--rc genhtml_legend=1
00:08:47.316  		--rc geninfo_all_blocks=1
00:08:47.316  		--rc geninfo_unexecuted_blocks=1
00:08:47.316  		
00:08:47.316  		'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:47.316  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.316  		--rc genhtml_branch_coverage=1
00:08:47.316  		--rc genhtml_function_coverage=1
00:08:47.316  		--rc genhtml_legend=1
00:08:47.316  		--rc geninfo_all_blocks=1
00:08:47.316  		--rc geninfo_unexecuted_blocks=1
00:08:47.316  		
00:08:47.316  		'
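The LCOV_OPTS and LCOV values exported above are consumed later when coverage data is collected. A hedged sketch of a typical capture using them (the build directory and output file name are assumptions, not taken from this run):

    # $LCOV expands to "lcov --rc lcov_branch_coverage=1 ..." as exported above.
    $LCOV --capture --directory build --output-file coverage.info   # gather .gcda data
    $LCOV --list coverage.info                                      # per-file summary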
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
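Together these variables parameterize every later host-side connect. A sketch of how they typically combine (the target address and the use of NVME_SUBNQN here are illustrative, not taken from this trace):

    # "${NVME_HOST[@]}" expands to --hostnqn=... --hostid=... as defined above.
    nvme connect -t rdma -a 192.168.100.8 -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"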
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:47.316     13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:47.316      13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.316      13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.316      13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.316      13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:08:47.316      13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:47.316  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
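The "integer expression expected" message above is a benign script bug: an empty variable reaches a numeric test as [ '' -eq 1 ]. A defensive form that avoids it (SOME_FLAG is a stand-in for whichever variable was empty at common.sh line 33):

    # Default empty/unset to 0 before the numeric comparison.
    if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
        echo "flag set"
    fi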
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:47.316    13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:08:47.316   13:34:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:08:53.885  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:08:53.885  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:08:53.885  Found net devices under 0000:d9:00.0: mlx_0_0
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:08:53.885  Found net devices under 0000:d9:00.1: mlx_0_1
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
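The loop traced above maps each RDMA-capable PCI function to its netdev through sysfs. The same lookup in standalone form (the BDF is the first device found in this run):

    # PCI function -> netdev name(s) via sysfs, as in nvmf/common.sh line 411.
    pci=0000:d9:00.0
    for d in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $d ]] && echo "${d##*/}"   # prints mlx_0_0 on this node
    done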
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:08:53.885    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:08:53.885   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm
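rdma_device_init brings up the kernel IB/RDMA stack via the modprobe calls above. A quick, purely illustrative check that the modules actually loaded:

    # Verify the RDMA stack is present; not part of the test script itself.
    lsmod | grep -E '^(ib_core|ib_uverbs|ib_cm|ib_umad|iw_cm|rdma_cm|rdma_ucm)' \
        || echo 'RDMA modules missing'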
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:54.145  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:54.145      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:08:54.145      altname enp217s0f0np0
00:08:54.145      altname ens818f0np0
00:08:54.145      inet 192.168.100.8/24 scope global mlx_0_0
00:08:54.145         valid_lft forever preferred_lft forever
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:54.145  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:54.145      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:08:54.145      altname enp217s0f1np1
00:08:54.145      altname ens818f1np1
00:08:54.145      inet 192.168.100.9/24 scope global mlx_0_1
00:08:54.145         valid_lft forever preferred_lft forever
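get_ip_address, traced once per interface above, is an ip/awk/cut pipeline; its standalone equivalent:

    # First IPv4 address of an interface, exactly as traced at nvmf/common.sh line 117.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node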
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:54.145      13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:54.145      13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:54.145     13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:54.145  192.168.100.9'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:54.145  192.168.100.9'
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1
00:08:54.145   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:54.145    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1
00:08:54.146    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:54.146  192.168.100.9'
00:08:54.146    13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma
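The two discovered addresses are then split into first and second target IPs with head/tail, as traced above; a standalone approximation (the list literal is copied from this run):

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)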
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3172091
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3172091
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3172091 ']'
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:54.146  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:54.146   13:34:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:54.404  [2024-12-14 13:34:53.908143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:54.405  [2024-12-14 13:34:53.908228] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:54.405  [2024-12-14 13:34:54.039011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:54.405  [2024-12-14 13:34:54.139069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:54.405  [2024-12-14 13:34:54.139120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:54.405  [2024-12-14 13:34:54.139133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:54.405  [2024-12-14 13:34:54.139149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:54.405  [2024-12-14 13:34:54.139160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:54.405  [2024-12-14 13:34:54.141627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:54.405  [2024-12-14 13:34:54.141637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.405  [2024-12-14 13:34:54.141656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
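waitforlisten blocks until the freshly started nvmf_tgt answers on its RPC socket. A rough approximation of that helper (the real one in autotest_common.sh also watches the pid and handles retries more carefully):

    # Poll the UNIX-domain RPC socket until the target responds.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in {1..100}; do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done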
00:08:54.972   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:54.972   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:08:54.972   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:54.972   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:54.972   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:55.231   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:55.231   13:34:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:55.231  [2024-12-14 13:34:54.947608] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fd6441a4940) succeed.
00:08:55.231  [2024-12-14 13:34:54.956777] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fd64415d940) succeed.
00:08:55.493    13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:55.754   13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:08:55.754    13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:56.013   13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:08:56.013   13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:08:56.272    13:34:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:08:56.531   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6c8da315-f88a-4853-b385-8d600c042ea0
00:08:56.531    13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c8da315-f88a-4853-b385-8d600c042ea0 lvol 20
00:08:56.790   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4ed45f62-6fc9-4676-b421-792d3ab0b0d5
00:08:56.790   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:56.790   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ed45f62-6fc9-4676-b421-792d3ab0b0d5
00:08:57.049   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:08:57.308  [2024-12-14 13:34:56.837140] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:57.308   13:34:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
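Collapsing the xtrace above, the whole target stack is built with a short rpc.py sequence (the UUIDs are those generated in this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs              # -> lvstore UUID
    $rpc bdev_lvol_create -u 6c8da315-f88a-4853-b385-8d600c042ea0 lvol 20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ed45f62-6fc9-4676-b421-792d3ab0b0d5
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420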
00:08:57.567   13:34:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3172657
00:08:57.567   13:34:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:08:57.567   13:34:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:08:58.504    13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4ed45f62-6fc9-4676-b421-792d3ab0b0d5 MY_SNAPSHOT
00:08:58.763   13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4b40c11f-a82b-4336-9eb4-1d6e9fe6feae
00:08:58.763   13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4ed45f62-6fc9-4676-b421-792d3ab0b0d5 30
00:08:59.023    13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4b40c11f-a82b-4336-9eb4-1d6e9fe6feae MY_CLONE
00:08:59.023   13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=05c28cc5-fd7f-4105-80d4-369a86eda1ab
00:08:59.023   13:34:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 05c28cc5-fd7f-4105-80d4-369a86eda1ab
00:08:59.289   13:34:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3172657
00:09:09.449  Initializing NVMe Controllers
00:09:09.449  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:09:09.449  Controller IO queue size 128, less than required.
00:09:09.449  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:09.449  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:09.449  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:09.449  Initialization complete. Launching workers.
00:09:09.449  ========================================================
00:09:09.449                                                                                                                     Latency(us)
00:09:09.449  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:09:09.449  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   15146.10      59.16    8452.06    3625.44  122572.30
00:09:09.449  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   15111.50      59.03    8470.11      79.26  107112.80
00:09:09.449  ========================================================
00:09:09.449  Total                                                                          :   30257.60     118.19    8461.07      79.26  122572.30
00:09:09.449  
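For readability, these are the lvol operations that ran while the spdk_nvme_perf workload above was in flight, collapsed from the earlier xtrace (UUIDs from this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_lvol_snapshot 4ed45f62-6fc9-4676-b421-792d3ab0b0d5 MY_SNAPSHOT
    $rpc bdev_lvol_resize 4ed45f62-6fc9-4676-b421-792d3ab0b0d5 30
    $rpc bdev_lvol_clone 4b40c11f-a82b-4336-9eb4-1d6e9fe6feae MY_CLONE
    $rpc bdev_lvol_inflate 05c28cc5-fd7f-4105-80d4-369a86eda1ab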
00:09:09.449   13:35:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:09.449   13:35:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ed45f62-6fc9-4676-b421-792d3ab0b0d5
00:09:09.449   13:35:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c8da315-f88a-4853-b385-8d600c042ea0
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:09.449  rmmod nvme_rdma
00:09:09.449  rmmod nvme_fabrics
00:09:09.449   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3172091 ']'
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3172091
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3172091 ']'
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3172091
00:09:09.709    13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:09.709    13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172091
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172091'
00:09:09.709  killing process with pid 3172091
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3172091
00:09:09.709   13:35:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3172091
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
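nvmftestfini's unload path, traced above, retries module removal with errexit disarmed; a hedged sketch with the loop structure approximated:

    # Approximation of the nvmf/common.sh teardown loop (lines 124-129).
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e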
00:09:11.616  
00:09:11.616  real	0m24.435s
00:09:11.616  user	1m16.635s
00:09:11.616  sys	0m6.746s
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:11.616  ************************************
00:09:11.616  END TEST nvmf_lvol
00:09:11.616  ************************************
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:11.616  ************************************
00:09:11.616  START TEST nvmf_lvs_grow
00:09:11.616  ************************************
00:09:11.616   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma
00:09:11.616  * Looking for test storage...
00:09:11.616  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:11.616    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:11.616     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:09:11.616     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:11.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.876  		--rc genhtml_branch_coverage=1
00:09:11.876  		--rc genhtml_function_coverage=1
00:09:11.876  		--rc genhtml_legend=1
00:09:11.876  		--rc geninfo_all_blocks=1
00:09:11.876  		--rc geninfo_unexecuted_blocks=1
00:09:11.876  		
00:09:11.876  		'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:11.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.876  		--rc genhtml_branch_coverage=1
00:09:11.876  		--rc genhtml_function_coverage=1
00:09:11.876  		--rc genhtml_legend=1
00:09:11.876  		--rc geninfo_all_blocks=1
00:09:11.876  		--rc geninfo_unexecuted_blocks=1
00:09:11.876  		
00:09:11.876  		'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:11.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.876  		--rc genhtml_branch_coverage=1
00:09:11.876  		--rc genhtml_function_coverage=1
00:09:11.876  		--rc genhtml_legend=1
00:09:11.876  		--rc geninfo_all_blocks=1
00:09:11.876  		--rc geninfo_unexecuted_blocks=1
00:09:11.876  		
00:09:11.876  		'
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:11.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.876  		--rc genhtml_branch_coverage=1
00:09:11.876  		--rc genhtml_function_coverage=1
00:09:11.876  		--rc genhtml_legend=1
00:09:11.876  		--rc geninfo_all_blocks=1
00:09:11.876  		--rc geninfo_unexecuted_blocks=1
00:09:11.876  		
00:09:11.876  		'
00:09:11.876   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:11.876     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:11.876    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:11.877     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:11.877     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:09:11.877     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:11.877     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:11.877     13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:11.877      13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:11.877      13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:11.877      13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:11.877      13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:09:11.877      13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:11.877  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
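The "integer expression expected" diagnostic from common.sh line 33 above is benign: the script numerically compares a variable that is empty in this run, `[` prints the error and returns a non-zero status, and execution simply falls through to the next branch. It reproduces in isolation:

```bash
# test(1) cannot treat an empty string as an integer; it prints
# "[: : integer expression expected" and returns non-zero,
# so the guarded branch is skipped rather than aborting the script.
[ '' -eq 1 ] && echo "branch taken"   # prints the diagnostic, not "branch taken"
```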
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:11.877    13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:09:11.877   13:35:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:09:18.508  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:09:18.508  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:09:18.508  Found net devices under 0000:d9:00.0: mlx_0_0
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:09:18.508  Found net devices under 0000:d9:00.1: mlx_0_1
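The two "Found net devices under ..." lines come from common.sh@411 globbing the PCI function's sysfs node; the same lookup works standalone (bus addresses taken from this log):

```bash
# Each PCI network function exposes its netdev name(s) under sysfs.
for pci in 0000:d9:00.0 0000:d9:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"   # prints mlx_0_0 / mlx_0_1 on this node
done
```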
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm
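load_ib_rdma_modules pulls in the full IB/RDMA kernel stack before any device setup; since modprobe is idempotent, the equivalent loop is safe to re-run on an already-prepared host:

```bash
# Same module set as common.sh@66-72, loaded in one pass.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    sudo modprobe "$mod"
done
```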
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips
00:09:18.508   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:18.508     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:18.508     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:18.508    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:09:18.509  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:18.509      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:09:18.509      altname enp217s0f0np0
00:09:18.509      altname ens818f0np0
00:09:18.509      inet 192.168.100.8/24 scope global mlx_0_0
00:09:18.509         valid_lft forever preferred_lft forever
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:09:18.509  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:18.509      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:09:18.509      altname enp217s0f1np1
00:09:18.509      altname ens818f1np1
00:09:18.509      inet 192.168.100.9/24 scope global mlx_0_1
00:09:18.509         valid_lft forever preferred_lft forever
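get_ip_address, traced twice above at common.sh@116-117, is a three-stage pipeline over `ip -o -4`; extracted as a standalone function:

```bash
# First IPv4 address of an interface, with the /prefix length stripped.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
get_ip_address mlx_0_1   # -> 192.168.100.9
```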
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:18.509   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:18.509      13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:18.509      13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:18.509     13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:18.509    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:09:18.768  192.168.100.9'
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:09:18.768  192.168.100.9'
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:09:18.768  192.168.100.9'
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2
00:09:18.768    13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma
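The head/tail steps at common.sh@485-486 split the newline-separated RDMA_IP_LIST into the two target addresses; condensed:

```bash
# One RDMA-capable IP per line; the first entry becomes the primary target.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
```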
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3178496
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3178496
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3178496 ']'
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:18.768   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:18.769   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:18.769  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:18.769   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:18.769   13:35:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:18.769  [2024-12-14 13:35:18.415050] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:09:18.769  [2024-12-14 13:35:18.415151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:19.028  [2024-12-14 13:35:18.549564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.028  [2024-12-14 13:35:18.642253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:19.028  [2024-12-14 13:35:18.642298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:19.028  [2024-12-14 13:35:18.642310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:19.028  [2024-12-14 13:35:18.642322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:19.028  [2024-12-14 13:35:18.642332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:19.028  [2024-12-14 13:35:18.643573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:19.596   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:09:19.856  [2024-12-14 13:35:19.448715] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f20d4f1d940) succeed.
00:09:19.856  [2024-12-14 13:35:19.457940] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f20d4dbd940) succeed.
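nvmfappstart plus the RPC at nvmf_lvs_grow.sh@100 boil down to two steps, sketched here without the waitforlisten polling the real helpers perform:

```bash
# Start the target on core 0 with full tracing, then create the RDMA
# transport once the RPC socket (/var/tmp/spdk.sock) is up.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# ... the real scripts poll the RPC socket here via waitforlisten ...
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```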
00:09:19.856   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:09:19.856   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:19.856   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.856   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:20.115  ************************************
00:09:20.115  START TEST lvs_grow_clean
00:09:20.115  ************************************
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:20.115    13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:20.115   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:20.115    13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:20.374   13:35:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:20.374    13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:20.374    13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:20.633   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:20.633   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
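The expected cluster count follows from the sizes chosen earlier: a 200 MiB backing file carved into 4 MiB clusters (--cluster-sz 4194304 at @25/@28) gives 50 clusters, and at this geometry the lvstore metadata works out to a single cluster, leaving the 49 data clusters the @30 check asserts:

```bash
# 200 MiB / 4 MiB = 50 clusters; one is consumed by lvstore metadata here.
echo $(( 200 / 4 - 1 ))   # -> 49
```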
00:09:20.633    13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a lvol 150
00:09:20.893   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=32945ee2-0f19-4d96-9fc8-1ba02f5fb426
00:09:20.893   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:20.893   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:20.893  [2024-12-14 13:35:20.561958] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:20.893  [2024-12-14 13:35:20.562029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:20.893  true
00:09:20.893    13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:20.893    13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:21.152   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:21.152   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:21.411   13:35:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32945ee2-0f19-4d96-9fc8-1ba02f5fb426
00:09:21.670   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:09:21.670  [2024-12-14 13:35:21.328580] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:21.670   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
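The RPCs at @41-@44 wire the new lvol into an NVMe-oF subsystem; pulled out of the trace, with the lvol UUID from @33:

```bash
rpc=scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
    32945ee2-0f19-4d96-9fc8-1ba02f5fb426
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
```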
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3179077
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3179077 /var/tmp/bdevperf.sock
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3179077 ']'
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:21.929  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:21.929   13:35:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:21.929  [2024-12-14 13:35:21.612502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:09:21.929  [2024-12-14 13:35:21.612585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179077 ]
00:09:22.188  [2024-12-14 13:35:21.742061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:22.188  [2024-12-14 13:35:21.841986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:22.756   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:22.756   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:09:22.756   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:23.016  Nvme0n1
00:09:23.016   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:23.275  [
00:09:23.275    {
00:09:23.275      "name": "Nvme0n1",
00:09:23.275      "aliases": [
00:09:23.275        "32945ee2-0f19-4d96-9fc8-1ba02f5fb426"
00:09:23.275      ],
00:09:23.275      "product_name": "NVMe disk",
00:09:23.275      "block_size": 4096,
00:09:23.275      "num_blocks": 38912,
00:09:23.275      "uuid": "32945ee2-0f19-4d96-9fc8-1ba02f5fb426",
00:09:23.275      "numa_id": 1,
00:09:23.275      "assigned_rate_limits": {
00:09:23.275        "rw_ios_per_sec": 0,
00:09:23.275        "rw_mbytes_per_sec": 0,
00:09:23.275        "r_mbytes_per_sec": 0,
00:09:23.275        "w_mbytes_per_sec": 0
00:09:23.275      },
00:09:23.275      "claimed": false,
00:09:23.275      "zoned": false,
00:09:23.275      "supported_io_types": {
00:09:23.275        "read": true,
00:09:23.275        "write": true,
00:09:23.275        "unmap": true,
00:09:23.275        "flush": true,
00:09:23.275        "reset": true,
00:09:23.275        "nvme_admin": true,
00:09:23.275        "nvme_io": true,
00:09:23.275        "nvme_io_md": false,
00:09:23.275        "write_zeroes": true,
00:09:23.275        "zcopy": false,
00:09:23.275        "get_zone_info": false,
00:09:23.275        "zone_management": false,
00:09:23.275        "zone_append": false,
00:09:23.275        "compare": true,
00:09:23.275        "compare_and_write": true,
00:09:23.275        "abort": true,
00:09:23.275        "seek_hole": false,
00:09:23.275        "seek_data": false,
00:09:23.275        "copy": true,
00:09:23.275        "nvme_iov_md": false
00:09:23.275      },
00:09:23.275      "memory_domains": [
00:09:23.275        {
00:09:23.275          "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:09:23.275          "dma_device_type": 0
00:09:23.275        }
00:09:23.275      ],
00:09:23.275      "driver_specific": {
00:09:23.275        "nvme": [
00:09:23.275          {
00:09:23.275            "trid": {
00:09:23.275              "trtype": "RDMA",
00:09:23.275              "adrfam": "IPv4",
00:09:23.275              "traddr": "192.168.100.8",
00:09:23.275              "trsvcid": "4420",
00:09:23.275              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:09:23.275            },
00:09:23.275            "ctrlr_data": {
00:09:23.275              "cntlid": 1,
00:09:23.275              "vendor_id": "0x8086",
00:09:23.275              "model_number": "SPDK bdev Controller",
00:09:23.275              "serial_number": "SPDK0",
00:09:23.275              "firmware_revision": "25.01",
00:09:23.275              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:23.275              "oacs": {
00:09:23.275                "security": 0,
00:09:23.275                "format": 0,
00:09:23.275                "firmware": 0,
00:09:23.275                "ns_manage": 0
00:09:23.275              },
00:09:23.275              "multi_ctrlr": true,
00:09:23.275              "ana_reporting": false
00:09:23.275            },
00:09:23.275            "vs": {
00:09:23.275              "nvme_version": "1.3"
00:09:23.276            },
00:09:23.276            "ns_data": {
00:09:23.276              "id": 1,
00:09:23.276              "can_share": true
00:09:23.276            }
00:09:23.276          }
00:09:23.276        ],
00:09:23.276        "mp_policy": "active_passive"
00:09:23.276      }
00:09:23.276    }
00:09:23.276  ]
00:09:23.276   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3179343
00:09:23.276   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:23.276   13:35:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:23.276  Running I/O for 10 seconds...
00:09:24.654                                                                                                  Latency(us)
00:09:24.654  
[2024-12-14T12:35:24.393Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:24.655  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:24.655  	 Nvme0n1             :       1.00   30305.00     118.38       0.00     0.00       0.00       0.00       0.00
00:09:24.655  
[2024-12-14T12:35:24.393Z]  ===================================================================================================================
00:09:24.655  
[2024-12-14T12:35:24.393Z]  Total                       :              30305.00     118.38       0.00     0.00       0.00       0.00       0.00
00:09:24.655  
00:09:25.223   13:35:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:25.482  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:25.482  	 Nvme0n1             :       2.00   30576.50     119.44       0.00     0.00       0.00       0.00       0.00
00:09:25.482  
[2024-12-14T12:35:25.220Z]  ===================================================================================================================
00:09:25.482  
[2024-12-14T12:35:25.220Z]  Total                       :              30576.50     119.44       0.00     0.00       0.00       0.00       0.00
00:09:25.482  
00:09:25.482  true
00:09:25.482    13:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:25.482    13:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:25.741   13:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:25.741   13:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:25.741   13:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3179343
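The grow path exercised across @36-@37 and @60-@62 condenses to four commands: enlarge the backing file, let the aio bdev pick up the new size, grow the lvstore into it, then verify the cluster count doubled (minus the one metadata cluster):

```bash
aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 400M "$aio"                  # backing file: 51200 -> 102400 blocks
scripts/rpc.py bdev_aio_rescan aio_bdev  # aio bdev notices the resize
scripts/rpc.py bdev_lvol_grow_lvstore -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a \
    | jq -r '.[0].total_data_clusters'   # 49 -> 99
```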
00:09:26.309  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:26.309  	 Nvme0n1             :       3.00   30613.67     119.58       0.00     0.00       0.00       0.00       0.00
00:09:26.309  
[2024-12-14T12:35:26.047Z]  ===================================================================================================================
00:09:26.309  
[2024-12-14T12:35:26.047Z]  Total                       :              30613.67     119.58       0.00     0.00       0.00       0.00       0.00
00:09:26.309  
00:09:27.687  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:27.687  	 Nvme0n1             :       4.00   30696.25     119.91       0.00     0.00       0.00       0.00       0.00
00:09:27.687  
[2024-12-14T12:35:27.425Z]  ===================================================================================================================
00:09:27.687  
[2024-12-14T12:35:27.425Z]  Total                       :              30696.25     119.91       0.00     0.00       0.00       0.00       0.00
00:09:27.687  
00:09:28.624  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:28.624  	 Nvme0n1             :       5.00   30784.80     120.25       0.00     0.00       0.00       0.00       0.00
00:09:28.624  
[2024-12-14T12:35:28.362Z]  ===================================================================================================================
00:09:28.624  
[2024-12-14T12:35:28.362Z]  Total                       :              30784.80     120.25       0.00     0.00       0.00       0.00       0.00
00:09:28.624  
00:09:29.562  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:29.562  	 Nvme0n1             :       6.00   30838.00     120.46       0.00     0.00       0.00       0.00       0.00
00:09:29.562  
[2024-12-14T12:35:29.300Z]  ===================================================================================================================
00:09:29.562  
[2024-12-14T12:35:29.300Z]  Total                       :              30838.00     120.46       0.00     0.00       0.00       0.00       0.00
00:09:29.562  
00:09:30.496  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:30.496  	 Nvme0n1             :       7.00   30875.43     120.61       0.00     0.00       0.00       0.00       0.00
00:09:30.496  
[2024-12-14T12:35:30.234Z]  ===================================================================================================================
00:09:30.496  
[2024-12-14T12:35:30.234Z]  Total                       :              30875.43     120.61       0.00     0.00       0.00       0.00       0.00
00:09:30.496  
00:09:31.432  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:31.432  	 Nvme0n1             :       8.00   30811.88     120.36       0.00     0.00       0.00       0.00       0.00
00:09:31.432  
[2024-12-14T12:35:31.170Z]  ===================================================================================================================
00:09:31.432  
[2024-12-14T12:35:31.170Z]  Total                       :              30811.88     120.36       0.00     0.00       0.00       0.00       0.00
00:09:31.432  
00:09:32.368  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:32.368  	 Nvme0n1             :       9.00   30847.89     120.50       0.00     0.00       0.00       0.00       0.00
00:09:32.368  
[2024-12-14T12:35:32.107Z]  ===================================================================================================================
00:09:32.369  
[2024-12-14T12:35:32.107Z]  Total                       :              30847.89     120.50       0.00     0.00       0.00       0.00       0.00
00:09:32.369  
00:09:33.305  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:33.305  	 Nvme0n1             :      10.00   30876.70     120.61       0.00     0.00       0.00       0.00       0.00
00:09:33.305  
[2024-12-14T12:35:33.043Z]  ===================================================================================================================
00:09:33.305  
[2024-12-14T12:35:33.043Z]  Total                       :              30876.70     120.61       0.00     0.00       0.00       0.00       0.00
00:09:33.305  
00:09:33.305  
00:09:33.305                                                                                                  Latency(us)
00:09:33.305  
[2024-12-14T12:35:33.043Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:33.305  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:33.305  	 Nvme0n1             :      10.00   30876.70     120.61       0.00     0.00    4142.27    2922.91   10433.33
00:09:33.305  
[2024-12-14T12:35:33.043Z]  ===================================================================================================================
00:09:33.305  
[2024-12-14T12:35:33.043Z]  Total                       :              30876.70     120.61       0.00     0.00    4142.27    2922.91   10433.33
00:09:33.305  {
00:09:33.305    "results": [
00:09:33.305      {
00:09:33.305        "job": "Nvme0n1",
00:09:33.305        "core_mask": "0x2",
00:09:33.305        "workload": "randwrite",
00:09:33.305        "status": "finished",
00:09:33.305        "queue_depth": 128,
00:09:33.305        "io_size": 4096,
00:09:33.305        "runtime": 10.004145,
00:09:33.305        "iops": 30876.701607183822,
00:09:33.305        "mibps": 120.6121156530618,
00:09:33.305        "io_failed": 0,
00:09:33.305        "io_timeout": 0,
00:09:33.305        "avg_latency_us": 4142.267870741837,
00:09:33.305        "min_latency_us": 2922.9056,
00:09:33.305        "max_latency_us": 10433.3312
00:09:33.305      }
00:09:33.305    ],
00:09:33.305    "core_count": 1
00:09:33.305  }
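perform_tests emits plain JSON alongside the formatted tables, so the headline numbers are scriptable; assuming the block above were captured to results.json (a hypothetical file name), jq pulls them out directly:

```bash
# Field names taken from the JSON result block above.
jq -r '.results[0] | "\(.iops) IOPS, \(.avg_latency_us) us avg latency"' results.json
# -> 30876.701607183822 IOPS, 4142.267870741837 us avg latency
```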
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3179077
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3179077 ']'
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3179077
00:09:33.565    13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.565    13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179077
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179077'
00:09:33.565  killing process with pid 3179077
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3179077
00:09:33.565  Received shutdown signal, test time was about 10.000000 seconds
00:09:33.565  
00:09:33.565                                                                                                  Latency(us)
00:09:33.565  
[2024-12-14T12:35:33.303Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:33.565  
[2024-12-14T12:35:33.303Z]  ===================================================================================================================
00:09:33.565  
[2024-12-14T12:35:33.303Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:09:33.565   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3179077
00:09:34.503   13:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:09:34.503   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:34.762    13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:34.762    13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:09:35.021   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:09:35.021   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:09:35.021   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:35.021  [2024-12-14 13:35:34.746259] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:35.281    13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:35.281    13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:09:35.281   13:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:35.281  request:
00:09:35.281  {
00:09:35.281    "uuid": "0cadbf7d-6b94-4978-884b-0a45fbe5716a",
00:09:35.281    "method": "bdev_lvol_get_lvstores",
00:09:35.281    "req_id": 1
00:09:35.281  }
00:09:35.281  Got JSON-RPC error response
00:09:35.281  response:
00:09:35.281  {
00:09:35.281    "code": -19,
00:09:35.281    "message": "No such device"
00:09:35.281  }
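The NOT wrapper from autotest_common.sh inverts the RPC's exit status: after bdev_aio_delete tore down the base bdev, bdev_lvol_get_lvstores must fail with -19 (No such device). A standalone equivalent of that negative check:

```bash
# Expect failure: the lvstore closed when its base bdev was removed.
if scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi
```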
00:09:35.281   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:09:35.281   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:35.281   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:35.281   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:35.281   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:35.540  aio_bdev
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32945ee2-0f19-4d96-9fc8-1ba02f5fb426
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=32945ee2-0f19-4d96-9fc8-1ba02f5fb426
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:35.540   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:35.799   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32945ee2-0f19-4d96-9fc8-1ba02f5fb426 -t 2000
00:09:36.058  [
00:09:36.058    {
00:09:36.058      "name": "32945ee2-0f19-4d96-9fc8-1ba02f5fb426",
00:09:36.058      "aliases": [
00:09:36.058        "lvs/lvol"
00:09:36.058      ],
00:09:36.058      "product_name": "Logical Volume",
00:09:36.058      "block_size": 4096,
00:09:36.058      "num_blocks": 38912,
00:09:36.058      "uuid": "32945ee2-0f19-4d96-9fc8-1ba02f5fb426",
00:09:36.058      "assigned_rate_limits": {
00:09:36.058        "rw_ios_per_sec": 0,
00:09:36.058        "rw_mbytes_per_sec": 0,
00:09:36.058        "r_mbytes_per_sec": 0,
00:09:36.058        "w_mbytes_per_sec": 0
00:09:36.058      },
00:09:36.058      "claimed": false,
00:09:36.058      "zoned": false,
00:09:36.058      "supported_io_types": {
00:09:36.058        "read": true,
00:09:36.058        "write": true,
00:09:36.058        "unmap": true,
00:09:36.058        "flush": false,
00:09:36.058        "reset": true,
00:09:36.058        "nvme_admin": false,
00:09:36.058        "nvme_io": false,
00:09:36.058        "nvme_io_md": false,
00:09:36.058        "write_zeroes": true,
00:09:36.058        "zcopy": false,
00:09:36.058        "get_zone_info": false,
00:09:36.058        "zone_management": false,
00:09:36.058        "zone_append": false,
00:09:36.058        "compare": false,
00:09:36.058        "compare_and_write": false,
00:09:36.058        "abort": false,
00:09:36.058        "seek_hole": true,
00:09:36.058        "seek_data": true,
00:09:36.058        "copy": false,
00:09:36.058        "nvme_iov_md": false
00:09:36.058      },
00:09:36.058      "driver_specific": {
00:09:36.058        "lvol": {
00:09:36.058          "lvol_store_uuid": "0cadbf7d-6b94-4978-884b-0a45fbe5716a",
00:09:36.058          "base_bdev": "aio_bdev",
00:09:36.058          "thin_provision": false,
00:09:36.058          "num_allocated_clusters": 38,
00:09:36.058          "snapshot": false,
00:09:36.058          "clone": false,
00:09:36.058          "esnap_clone": false
00:09:36.058        }
00:09:36.058      }
00:09:36.058    }
00:09:36.058  ]
00:09:36.058   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:09:36.058    13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:36.058    13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:09:36.058   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:09:36.058    13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:36.058    13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:09:36.317   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:09:36.318   13:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32945ee2-0f19-4d96-9fc8-1ba02f5fb426
00:09:36.576   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cadbf7d-6b94-4978-884b-0a45fbe5716a
00:09:36.836   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:36.836   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:36.836  
00:09:36.836  real	0m16.942s
00:09:36.836  user	0m16.742s
00:09:36.836  sys	0m1.380s
00:09:36.836   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:36.836   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:36.836  ************************************
00:09:36.836  END TEST lvs_grow_clean
00:09:36.836  ************************************
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:37.095  ************************************
00:09:37.095  START TEST lvs_grow_dirty
00:09:37.095  ************************************
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:37.095   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:37.095    13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:37.354   13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:37.354    13:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:37.354   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:37.354    13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:37.355    13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:37.614   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:37.614   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
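The cluster count follows directly from the sizes: the backing file is 200 MiB and --cluster-sz is 4194304 (4 MiB), giving 50 raw clusters, while the store reports total_data_clusters=49, so one cluster is evidently held back for lvstore metadata (whose sizing is influenced by the --md-pages-per-cluster-ratio 300 flag above). A quick check:

  echo $(( 200 * 1024 * 1024 / 4194304 ))   # -> 50 clusters in the file
  # total_data_clusters reported: 49 -> one cluster consumed by metadata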
00:09:37.614    13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11e0d369-0bff-4c20-9df8-1897bd71e04b lvol 150
00:09:37.873   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:37.873   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:37.873   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:37.873  [2024-12-14 13:35:37.578724] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:37.873  [2024-12-14 13:35:37.578793] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:37.873  true
00:09:37.873    13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:37.873    13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:38.132   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
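Note what the rescan did and did not do: the NOTICE above shows the AIO bdev growing from 51200 to 102400 blocks, i.e. 200 MiB to 400 MiB at 4096 B per block, but the lvol layer only logs "Unsupported bdev event: type 1" (a resize event it ignores), and total_data_clusters stays at 49 until an explicit bdev_lvol_grow_lvstore later in the test. The block math:

  echo $((  51200 * 4096 / 1048576 ))   # -> 200 (MiB before rescan)
  echo $(( 102400 * 4096 / 1048576 ))   # -> 400 (MiB after rescan)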
00:09:38.132   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:38.391   13:35:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:38.650   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:09:38.650  [2024-12-14 13:35:38.329284] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:38.650   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
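The three RPCs above export the lvol over NVMe-oF/RDMA: create the subsystem (-a allows any host, -s sets the serial), attach the lvol as a namespace, and open listeners for both the subsystem and discovery. Condensed to a standalone sketch with the same arguments as this run:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 811c7fdd-6e8c-4366-bf95-2c7430c56653
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420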
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3182072
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3182072 /var/tmp/bdevperf.sock
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3182072 ']'
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:38.909  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:38.909   13:35:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:38.909  [2024-12-14 13:35:38.614549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:09:38.909  [2024-12-14 13:35:38.614646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182072 ]
00:09:39.169  [2024-12-14 13:35:38.746372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.169  [2024-12-14 13:35:38.848937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:39.737   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:39.737   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
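bdevperf was started above with -z, which makes it come up idle and wait to be configured over its own RPC socket; waitforlisten polls that socket before the harness attaches the NVMe-oF controller. A minimal sketch of the same pattern, using the flags and paths from this run:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
  "$BDEVPERF" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  pid=$!
  # Poll until the RPC socket exists before configuring the test target.
  while [[ ! -S /var/tmp/bdevperf.sock ]]; do sleep 0.1; done
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0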
00:09:39.737   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:39.995  Nvme0n1
00:09:39.996   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:40.255  [
00:09:40.255    {
00:09:40.255      "name": "Nvme0n1",
00:09:40.255      "aliases": [
00:09:40.255        "811c7fdd-6e8c-4366-bf95-2c7430c56653"
00:09:40.255      ],
00:09:40.255      "product_name": "NVMe disk",
00:09:40.255      "block_size": 4096,
00:09:40.255      "num_blocks": 38912,
00:09:40.255      "uuid": "811c7fdd-6e8c-4366-bf95-2c7430c56653",
00:09:40.255      "numa_id": 1,
00:09:40.255      "assigned_rate_limits": {
00:09:40.255        "rw_ios_per_sec": 0,
00:09:40.255        "rw_mbytes_per_sec": 0,
00:09:40.255        "r_mbytes_per_sec": 0,
00:09:40.255        "w_mbytes_per_sec": 0
00:09:40.255      },
00:09:40.255      "claimed": false,
00:09:40.255      "zoned": false,
00:09:40.255      "supported_io_types": {
00:09:40.255        "read": true,
00:09:40.255        "write": true,
00:09:40.255        "unmap": true,
00:09:40.255        "flush": true,
00:09:40.255        "reset": true,
00:09:40.255        "nvme_admin": true,
00:09:40.255        "nvme_io": true,
00:09:40.255        "nvme_io_md": false,
00:09:40.255        "write_zeroes": true,
00:09:40.255        "zcopy": false,
00:09:40.255        "get_zone_info": false,
00:09:40.255        "zone_management": false,
00:09:40.255        "zone_append": false,
00:09:40.255        "compare": true,
00:09:40.255        "compare_and_write": true,
00:09:40.255        "abort": true,
00:09:40.255        "seek_hole": false,
00:09:40.255        "seek_data": false,
00:09:40.255        "copy": true,
00:09:40.255        "nvme_iov_md": false
00:09:40.255      },
00:09:40.255      "memory_domains": [
00:09:40.255        {
00:09:40.255          "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:09:40.255          "dma_device_type": 0
00:09:40.255        }
00:09:40.255      ],
00:09:40.255      "driver_specific": {
00:09:40.255        "nvme": [
00:09:40.255          {
00:09:40.255            "trid": {
00:09:40.255              "trtype": "RDMA",
00:09:40.255              "adrfam": "IPv4",
00:09:40.255              "traddr": "192.168.100.8",
00:09:40.255              "trsvcid": "4420",
00:09:40.255              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:09:40.255            },
00:09:40.255            "ctrlr_data": {
00:09:40.255              "cntlid": 1,
00:09:40.255              "vendor_id": "0x8086",
00:09:40.255              "model_number": "SPDK bdev Controller",
00:09:40.255              "serial_number": "SPDK0",
00:09:40.255              "firmware_revision": "25.01",
00:09:40.255              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:40.255              "oacs": {
00:09:40.255                "security": 0,
00:09:40.255                "format": 0,
00:09:40.255                "firmware": 0,
00:09:40.255                "ns_manage": 0
00:09:40.255              },
00:09:40.255              "multi_ctrlr": true,
00:09:40.255              "ana_reporting": false
00:09:40.255            },
00:09:40.255            "vs": {
00:09:40.255              "nvme_version": "1.3"
00:09:40.255            },
00:09:40.255            "ns_data": {
00:09:40.255              "id": 1,
00:09:40.255              "can_share": true
00:09:40.255            }
00:09:40.255          }
00:09:40.255        ],
00:09:40.255        "mp_policy": "active_passive"
00:09:40.255      }
00:09:40.255    }
00:09:40.255  ]
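The geometry in this JSON is consistent with the lvol creation earlier: "bdev_lvol_create ... lvol 150" on 4 MiB clusters rounds 150 MiB up to whole clusters, ceil(150 / 4) = 38, which is exactly the "num_allocated_clusters": 38 reported for the lvol, and 38 clusters at 4 MiB is 152 MiB:

  echo $(( 38 * 4194304 / 4096 ))   # -> 38912, matching "num_blocks" above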
00:09:40.255   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3182342
00:09:40.255   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:40.255   13:35:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:40.255  Running I/O for 10 seconds...
00:09:41.633                                                                                                  Latency(us)
00:09:41.633  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:41.633  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:41.633  	 Nvme0n1             :       1.00   29568.00     115.50       0.00     0.00       0.00       0.00       0.00
00:09:41.633  ===================================================================================================================
00:09:41.633  Total                       :              29568.00     115.50       0.00     0.00       0.00       0.00       0.00
00:09:42.202   13:35:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:42.461  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:42.461  	 Nvme0n1             :       2.00   30144.00     117.75       0.00     0.00       0.00       0.00       0.00
00:09:42.461  ===================================================================================================================
00:09:42.461  Total                       :              30144.00     117.75       0.00     0.00       0.00       0.00       0.00
00:09:42.461  true
00:09:42.461    13:35:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:42.461    13:35:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:42.720   13:35:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:42.720   13:35:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:42.720   13:35:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3182342
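The grow succeeded while bdevperf I/O was still running: after the file was truncated to 400 MiB there are 100 raw clusters, total_data_clusters becomes 99 (one still held for metadata), and free space is the total minus the lvol's 38 allocated clusters, which is what the later free_clusters checks assert:

  echo $(( 400 * 1024 * 1024 / 4194304 ))   # -> 100 raw clusters
  echo $(( 99 - 38 ))                       # -> 61 free_clusters, as checked below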
00:09:43.289  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:43.289  	 Nvme0n1             :       3.00   30272.33     118.25       0.00     0.00       0.00       0.00       0.00
00:09:43.289  ===================================================================================================================
00:09:43.289  Total                       :              30272.33     118.25       0.00     0.00       0.00       0.00       0.00
00:09:44.226  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:44.227  	 Nvme0n1             :       4.00   30439.75     118.91       0.00     0.00       0.00       0.00       0.00
00:09:44.227  ===================================================================================================================
00:09:44.227  Total                       :              30439.75     118.91       0.00     0.00       0.00       0.00       0.00
00:09:45.604  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:45.604  	 Nvme0n1             :       5.00   30559.60     119.37       0.00     0.00       0.00       0.00       0.00
00:09:45.604  ===================================================================================================================
00:09:45.604  Total                       :              30559.60     119.37       0.00     0.00       0.00       0.00       0.00
00:09:46.546  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:46.546  	 Nvme0n1             :       6.00   30635.00     119.67       0.00     0.00       0.00       0.00       0.00
00:09:46.546  ===================================================================================================================
00:09:46.546  Total                       :              30635.00     119.67       0.00     0.00       0.00       0.00       0.00
00:09:47.482  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:47.482  	 Nvme0n1             :       7.00   30697.14     119.91       0.00     0.00       0.00       0.00       0.00
00:09:47.482  ===================================================================================================================
00:09:47.482  Total                       :              30697.14     119.91       0.00     0.00       0.00       0.00       0.00
00:09:48.419  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:48.419  	 Nvme0n1             :       8.00   30688.00     119.88       0.00     0.00       0.00       0.00       0.00
00:09:48.419  ===================================================================================================================
00:09:48.419  Total                       :              30688.00     119.88       0.00     0.00       0.00       0.00       0.00
00:09:49.357  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:49.357  	 Nvme0n1             :       9.00   30730.56     120.04       0.00     0.00       0.00       0.00       0.00
00:09:49.357  ===================================================================================================================
00:09:49.357  Total                       :              30730.56     120.04       0.00     0.00       0.00       0.00       0.00
00:09:50.294  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:50.294  	 Nvme0n1             :      10.00   30767.90     120.19       0.00     0.00       0.00       0.00       0.00
00:09:50.294  ===================================================================================================================
00:09:50.294  Total                       :              30767.90     120.19       0.00     0.00       0.00       0.00       0.00
00:09:50.294  
00:09:50.294                                                                                                  Latency(us)
00:09:50.294  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:50.294  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:50.294  	 Nvme0n1             :      10.00   30766.53     120.18       0.00     0.00    4156.91    3014.66   18664.65
00:09:50.294  ===================================================================================================================
00:09:50.294  Total                       :              30766.53     120.18       0.00     0.00    4156.91    3014.66   18664.65
00:09:50.294  {
00:09:50.294    "results": [
00:09:50.294      {
00:09:50.294        "job": "Nvme0n1",
00:09:50.294        "core_mask": "0x2",
00:09:50.294        "workload": "randwrite",
00:09:50.294        "status": "finished",
00:09:50.294        "queue_depth": 128,
00:09:50.294        "io_size": 4096,
00:09:50.294        "runtime": 10.003629,
00:09:50.294        "iops": 30766.534824512186,
00:09:50.294        "mibps": 120.18177665825073,
00:09:50.294        "io_failed": 0,
00:09:50.294        "io_timeout": 0,
00:09:50.294        "avg_latency_us": 4156.914216888202,
00:09:50.294        "min_latency_us": 3014.656,
00:09:50.294        "max_latency_us": 18664.6528
00:09:50.294      }
00:09:50.294    ],
00:09:50.294    "core_count": 1
00:09:50.294  }
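The summary numbers are internally consistent: at a 4096 B I/O size, throughput in MiB/s is IOPS x 4096 / 2^20, so the reported iops reproduces the reported mibps:

  awk 'BEGIN { print 30766.534824512186 * 4096 / 1048576 }'   # -> ~120.18 MiB/s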
00:09:50.294   13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3182072
00:09:50.294   13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3182072 ']'
00:09:50.294   13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3182072
00:09:50.294    13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:09:50.294   13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:50.294    13:35:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3182072
00:09:50.552   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:50.553   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:50.553   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3182072'
00:09:50.553  killing process with pid 3182072
00:09:50.553   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3182072
00:09:50.553  Received shutdown signal, test time was about 10.000000 seconds
00:09:50.553  
00:09:50.553                                                                                                  Latency(us)
00:09:50.553  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:50.553  ===================================================================================================================
00:09:50.553  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:09:50.553   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3182072
00:09:51.485   13:35:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:09:51.485   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:51.743    13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:51.743    13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3178496
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3178496
00:09:52.002  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3178496 Killed                  "${NVMF_APP[@]}" "$@"
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
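This is the step that makes the variant "dirty": the harness SIGKILLs the running nvmf_tgt (pid 3178496) so the lvstore never gets a clean shutdown, then restarts the target to exercise recovery. The `wait` surfaces the kill as a non-zero status, hence the `true` that follows. Sketched, with $nvmfpid standing for the target pid:

  kill -9 "$nvmfpid"
  wait "$nvmfpid" || true   # wait reports the KILL status; discard it
  # restart the target; the lvstore on aio_bdev is now dirty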
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3184370
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3184370
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3184370 ']'
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:52.002  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:52.002   13:35:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:52.002  [2024-12-14 13:35:51.673036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:09:52.002  [2024-12-14 13:35:51.673141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:52.261  [2024-12-14 13:35:51.814196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.261  [2024-12-14 13:35:51.909456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:52.261  [2024-12-14 13:35:51.909507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:52.261  [2024-12-14 13:35:51.909519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:52.261  [2024-12-14 13:35:51.909531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:52.261  [2024-12-14 13:35:51.909540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:52.261  [2024-12-14 13:35:51.910878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:52.827   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:52.827    13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:53.086  [2024-12-14 13:35:52.671389] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:09:53.086  [2024-12-14 13:35:52.671523] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:09:53.086  [2024-12-14 13:35:52.671559] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
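The bs_recover / "Recover: blob" notices above show the blobstore replaying its metadata when the AIO bdev is recreated on the same file after the SIGKILL; no separate repair step is invoked, and once recovery completes the lvstore and its lvol simply reappear. A quick check of that, using the same RPCs as this run:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$RPC" bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b \
      | jq -r '.[0].name'   # -> lvs: the store came back after recovery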
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:53.086   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:53.345   13:35:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 811c7fdd-6e8c-4366-bf95-2c7430c56653 -t 2000
00:09:53.345  [
00:09:53.345    {
00:09:53.345      "name": "811c7fdd-6e8c-4366-bf95-2c7430c56653",
00:09:53.345      "aliases": [
00:09:53.345        "lvs/lvol"
00:09:53.345      ],
00:09:53.345      "product_name": "Logical Volume",
00:09:53.345      "block_size": 4096,
00:09:53.345      "num_blocks": 38912,
00:09:53.345      "uuid": "811c7fdd-6e8c-4366-bf95-2c7430c56653",
00:09:53.345      "assigned_rate_limits": {
00:09:53.345        "rw_ios_per_sec": 0,
00:09:53.345        "rw_mbytes_per_sec": 0,
00:09:53.345        "r_mbytes_per_sec": 0,
00:09:53.345        "w_mbytes_per_sec": 0
00:09:53.345      },
00:09:53.345      "claimed": false,
00:09:53.345      "zoned": false,
00:09:53.345      "supported_io_types": {
00:09:53.345        "read": true,
00:09:53.345        "write": true,
00:09:53.345        "unmap": true,
00:09:53.345        "flush": false,
00:09:53.345        "reset": true,
00:09:53.345        "nvme_admin": false,
00:09:53.345        "nvme_io": false,
00:09:53.345        "nvme_io_md": false,
00:09:53.345        "write_zeroes": true,
00:09:53.345        "zcopy": false,
00:09:53.345        "get_zone_info": false,
00:09:53.345        "zone_management": false,
00:09:53.345        "zone_append": false,
00:09:53.345        "compare": false,
00:09:53.345        "compare_and_write": false,
00:09:53.345        "abort": false,
00:09:53.345        "seek_hole": true,
00:09:53.345        "seek_data": true,
00:09:53.345        "copy": false,
00:09:53.345        "nvme_iov_md": false
00:09:53.345      },
00:09:53.345      "driver_specific": {
00:09:53.345        "lvol": {
00:09:53.345          "lvol_store_uuid": "11e0d369-0bff-4c20-9df8-1897bd71e04b",
00:09:53.345          "base_bdev": "aio_bdev",
00:09:53.345          "thin_provision": false,
00:09:53.345          "num_allocated_clusters": 38,
00:09:53.345          "snapshot": false,
00:09:53.345          "clone": false,
00:09:53.345          "esnap_clone": false
00:09:53.345        }
00:09:53.345      }
00:09:53.345    }
00:09:53.345  ]
00:09:53.345   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:09:53.345    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:53.345    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:09:53.603   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:09:53.603    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:53.603    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:09:53.862   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
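These two checks are the point of the dirty variant: the grown geometry (99 total data clusters, 61 free after the lvol's 38) survived the unclean shutdown, so the grow metadata was durably committed before the kill. As a standalone assertion:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  LVS=11e0d369-0bff-4c20-9df8-1897bd71e04b
  free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters')
  total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))   # grown geometry survived the SIGKILL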
00:09:53.862   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:53.862  [2024-12-14 13:35:53.591537] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:09:54.120   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:54.120   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:09:54.120   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:54.120   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:54.121    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:54.121    13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:54.121  request:
00:09:54.121  {
00:09:54.121    "uuid": "11e0d369-0bff-4c20-9df8-1897bd71e04b",
00:09:54.121    "method": "bdev_lvol_get_lvstores",
00:09:54.121    "req_id": 1
00:09:54.121  }
00:09:54.121  Got JSON-RPC error response
00:09:54.121  response:
00:09:54.121  {
00:09:54.121    "code": -19,
00:09:54.121    "message": "No such device"
00:09:54.121  }
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:54.121   13:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:54.379  aio_bdev
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:54.379   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:54.638   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 811c7fdd-6e8c-4366-bf95-2c7430c56653 -t 2000
00:09:54.638  [
00:09:54.638    {
00:09:54.638      "name": "811c7fdd-6e8c-4366-bf95-2c7430c56653",
00:09:54.638      "aliases": [
00:09:54.638        "lvs/lvol"
00:09:54.638      ],
00:09:54.638      "product_name": "Logical Volume",
00:09:54.638      "block_size": 4096,
00:09:54.638      "num_blocks": 38912,
00:09:54.638      "uuid": "811c7fdd-6e8c-4366-bf95-2c7430c56653",
00:09:54.638      "assigned_rate_limits": {
00:09:54.638        "rw_ios_per_sec": 0,
00:09:54.638        "rw_mbytes_per_sec": 0,
00:09:54.638        "r_mbytes_per_sec": 0,
00:09:54.638        "w_mbytes_per_sec": 0
00:09:54.638      },
00:09:54.638      "claimed": false,
00:09:54.638      "zoned": false,
00:09:54.638      "supported_io_types": {
00:09:54.638        "read": true,
00:09:54.638        "write": true,
00:09:54.638        "unmap": true,
00:09:54.638        "flush": false,
00:09:54.638        "reset": true,
00:09:54.638        "nvme_admin": false,
00:09:54.638        "nvme_io": false,
00:09:54.638        "nvme_io_md": false,
00:09:54.638        "write_zeroes": true,
00:09:54.638        "zcopy": false,
00:09:54.638        "get_zone_info": false,
00:09:54.638        "zone_management": false,
00:09:54.638        "zone_append": false,
00:09:54.638        "compare": false,
00:09:54.638        "compare_and_write": false,
00:09:54.638        "abort": false,
00:09:54.638        "seek_hole": true,
00:09:54.638        "seek_data": true,
00:09:54.638        "copy": false,
00:09:54.638        "nvme_iov_md": false
00:09:54.638      },
00:09:54.638      "driver_specific": {
00:09:54.638        "lvol": {
00:09:54.638          "lvol_store_uuid": "11e0d369-0bff-4c20-9df8-1897bd71e04b",
00:09:54.638          "base_bdev": "aio_bdev",
00:09:54.638          "thin_provision": false,
00:09:54.638          "num_allocated_clusters": 38,
00:09:54.638          "snapshot": false,
00:09:54.638          "clone": false,
00:09:54.638          "esnap_clone": false
00:09:54.638        }
00:09:54.638      }
00:09:54.638    }
00:09:54.638  ]
00:09:54.638   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:09:54.638    13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:54.638    13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:09:54.896   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:09:54.896    13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:54.896    13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:09:55.154   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:09:55.154   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 811c7fdd-6e8c-4366-bf95-2c7430c56653
00:09:55.154   13:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11e0d369-0bff-4c20-9df8-1897bd71e04b
00:09:55.413   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:55.671  
00:09:55.671  real	0m18.711s
00:09:55.671  user	0m48.632s
00:09:55.671  sys	0m3.512s
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:55.671  ************************************
00:09:55.671  END TEST lvs_grow_dirty
00:09:55.671  ************************************
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:09:55.671    13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:09:55.671   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:09:55.671  nvmf_trace.0
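On teardown the harness archives the target's shared-memory trace file for offline decoding; the target's own startup notice earlier in the log names `spdk_trace -s nvmf -i 0` as the way to inspect it. Condensed, the archival step is the tar command traced above; OUT below is a hypothetical output directory:

  OUT=/var/jenkins/workspace/nvmf-phy-autotest/output   # placeholder path
  tar -C /dev/shm/ -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0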
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:55.929   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:55.929  rmmod nvme_rdma
00:09:55.929  rmmod nvme_fabrics
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3184370 ']'
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3184370
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3184370 ']'
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3184370
00:09:55.930    13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.930    13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184370
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184370'
00:09:55.930  killing process with pid 3184370
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3184370
00:09:55.930   13:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3184370
00:09:56.865   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:56.865   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:56.865  
00:09:56.865  real	0m45.354s
00:09:56.865  user	1m12.564s
00:09:56.865  sys	0m10.895s
00:09:56.865   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.865   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:56.865  ************************************
00:09:56.865  END TEST nvmf_lvs_grow
00:09:56.865  ************************************
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:57.124  ************************************
00:09:57.124  START TEST nvmf_bdev_io_wait
00:09:57.124  ************************************
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma
00:09:57.124  * Looking for test storage...
00:09:57.124  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
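
The trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x by splitting both versions on dots/dashes and comparing field by field. A minimal standalone sketch of that comparator (a reconstruction of the traced logic, not the verbatim SPDK source):

    #!/usr/bin/env bash
    # lt A B: succeed (return 0) when version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields count as 0, so "2" behaves like "2.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"
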
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:57.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.124  		--rc genhtml_branch_coverage=1
00:09:57.124  		--rc genhtml_function_coverage=1
00:09:57.124  		--rc genhtml_legend=1
00:09:57.124  		--rc geninfo_all_blocks=1
00:09:57.124  		--rc geninfo_unexecuted_blocks=1
00:09:57.124  		
00:09:57.124  		'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:57.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.124  		--rc genhtml_branch_coverage=1
00:09:57.124  		--rc genhtml_function_coverage=1
00:09:57.124  		--rc genhtml_legend=1
00:09:57.124  		--rc geninfo_all_blocks=1
00:09:57.124  		--rc geninfo_unexecuted_blocks=1
00:09:57.124  		
00:09:57.124  		'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:57.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.124  		--rc genhtml_branch_coverage=1
00:09:57.124  		--rc genhtml_function_coverage=1
00:09:57.124  		--rc genhtml_legend=1
00:09:57.124  		--rc geninfo_all_blocks=1
00:09:57.124  		--rc geninfo_unexecuted_blocks=1
00:09:57.124  		
00:09:57.124  		'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:57.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.124  		--rc genhtml_branch_coverage=1
00:09:57.124  		--rc genhtml_function_coverage=1
00:09:57.124  		--rc genhtml_legend=1
00:09:57.124  		--rc geninfo_all_blocks=1
00:09:57.124  		--rc geninfo_unexecuted_blocks=1
00:09:57.124  		
00:09:57.124  		'
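
The two exports above pin branch and function coverage on for every later coverage pass: LCOV_OPTS carries the raw --rc flags, and LCOV is a pre-baked "lcov + flags" command string. A sketch of how such a wrapper is typically consumed downstream (the capture/report invocations are an assumption, not taken from this log):

    # $LCOV carries its --rc flags inside the string, so it is expanded
    # unquoted to let word splitting re-create the argument list.
    $LCOV --capture --directory ./build --output-file coverage.info
    genhtml $LCOV_OPTS coverage.info --output-directory coverage_html
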
00:09:57.124   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
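
nvme gen-hostnqn (nvme-cli) derives the host NQN shown above from the machine UUID, and the trace then packages the NQN/ID pair as a reusable argument array. A sketch of that packaging together with the connect call it feeds much later in the suite (the connect line is illustrative, not from this point in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn
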
00:09:57.124    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:57.124     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:09:57.383     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:57.383     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:57.383     13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:57.383      13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.383      13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.383      13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.383      13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:09:57.383      13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:57.383  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
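
The "integer expression expected" complaint from common.sh line 33 is the classic empty-string-in-a-numeric-test failure: the traced guard effectively ran '[ '' -eq 1 ]'. A hedged sketch of the failing pattern and the usual defensive rewrite (variable and function names here are placeholders, not the real common.sh identifiers):

    flag=''                                  # unset/empty in this environment
    [ "$flag" -eq 1 ] && enable_feature      # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && enable_feature # defaulting to 0 keeps the test quiet
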
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:57.383    13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:09:57.383   13:35:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:03.948   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:10:03.949  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:10:03.949  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:10:03.949  Found net devices under 0000:d9:00.0: mlx_0_0
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:10:03.949  Found net devices under 0000:d9:00.1: mlx_0_1
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
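
The loop above resolves each Mellanox PCI function to its kernel netdev by globbing the device's sysfs node. The same technique in isolation (the PCI address mirrors the trace; the script itself is a reconstruction):

    #!/usr/bin/env bash
    shopt -s nullglob                        # an empty glob yields an empty array
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if (( ${#pci_net_devs[@]} == 0 )); then
        echo "No net devices under $pci" >&2
        exit 1
    fi
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
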
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init
00:10:03.949   13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:10:03.949    13:36:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:03.949     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:03.949     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:03.949  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:03.949      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:03.949      altname enp217s0f0np0
00:10:03.949      altname ens818f0np0
00:10:03.949      inet 192.168.100.8/24 scope global mlx_0_0
00:10:03.949         valid_lft forever preferred_lft forever
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:03.949  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:03.949      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:03.949      altname enp217s0f1np1
00:10:03.949      altname ens818f1np1
00:10:03.949      inet 192.168.100.9/24 scope global mlx_0_1
00:10:03.949         valid_lft forever preferred_lft forever
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
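
get_ip_address above pulls the interface's first IPv4 address out of 'ip -o -4' one-line-per-address output: field 4 is addr/prefix, and cut drops the prefix length. As a standalone helper:

    # Print the first IPv4 address (without the /prefix) of an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this test bed
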
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:03.949   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:10:03.949    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:10:03.949     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:03.949     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:03.949     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:03.949      13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:03.950      13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:03.950     13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:10:03.950  192.168.100.9'
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:10:03.950  192.168.100.9'
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:10:03.950  192.168.100.9'
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2
00:10:03.950    13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma
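
The head/tail juggling above peels the first two addresses off the newline-separated RDMA_IP_LIST; condensed:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
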
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3188638
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3188638
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3188638 ']'
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:03.950  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:03.950   13:36:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:03.950  [2024-12-14 13:36:03.302144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:03.950  [2024-12-14 13:36:03.302245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:03.950  [2024-12-14 13:36:03.431691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:03.950  [2024-12-14 13:36:03.538788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:03.950  [2024-12-14 13:36:03.538834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:03.950  [2024-12-14 13:36:03.538847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:03.950  [2024-12-14 13:36:03.538860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:03.950  [2024-12-14 13:36:03.538871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:03.950  [2024-12-14 13:36:03.541672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:03.950  [2024-12-14 13:36:03.541749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:10:03.950  [2024-12-14 13:36:03.541763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.950  [2024-12-14 13:36:03.541768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
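
waitforlisten above blocks until the freshly launched nvmf_tgt (pid 3188638) answers on /var/tmp/spdk.sock. A sketch of that polling pattern (the retry budget and probe RPC are assumptions; rpc_get_methods is a standard SPDK RPC):

    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        # Probe the UNIX socket with a harmless RPC until the app is up.
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
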
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.517   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:04.776   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.776   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:04.776   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.776   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:04.776  [2024-12-14 13:36:04.416031] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7ffa0515c940) succeed.
00:10:04.776  [2024-12-14 13:36:04.425915] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7ffa05117940) succeed.
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:05.035  Malloc0
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.035   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:05.294  [2024-12-14 13:36:04.796888] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
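
Replayed by hand against a live nvmf_tgt, the rpc_cmd sequence above corresponds to these scripts/rpc.py calls (sizes, NQN and address mirror the trace; treat this as a sketch of the equivalent commands, not the harness itself):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
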
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3189190
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3189192
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:05.294  {
00:10:05.294    "params": {
00:10:05.294      "name": "Nvme$subsystem",
00:10:05.294      "trtype": "$TEST_TRANSPORT",
00:10:05.294      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:05.294      "adrfam": "ipv4",
00:10:05.294      "trsvcid": "$NVMF_PORT",
00:10:05.294      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:05.294      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:05.294      "hdgst": ${hdgst:-false},
00:10:05.294      "ddgst": ${ddgst:-false}
00:10:05.294    },
00:10:05.294    "method": "bdev_nvme_attach_controller"
00:10:05.294  }
00:10:05.294  EOF
00:10:05.294  )")
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3189194
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:05.294  {
00:10:05.294    "params": {
00:10:05.294      "name": "Nvme$subsystem",
00:10:05.294      "trtype": "$TEST_TRANSPORT",
00:10:05.294      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:05.294      "adrfam": "ipv4",
00:10:05.294      "trsvcid": "$NVMF_PORT",
00:10:05.294      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:05.294      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:05.294      "hdgst": ${hdgst:-false},
00:10:05.294      "ddgst": ${ddgst:-false}
00:10:05.294    },
00:10:05.294    "method": "bdev_nvme_attach_controller"
00:10:05.294  }
00:10:05.294  EOF
00:10:05.294  )")
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3189197
00:10:05.294     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:05.294  {
00:10:05.294    "params": {
00:10:05.294      "name": "Nvme$subsystem",
00:10:05.294      "trtype": "$TEST_TRANSPORT",
00:10:05.294      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:05.294      "adrfam": "ipv4",
00:10:05.294      "trsvcid": "$NVMF_PORT",
00:10:05.294      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:05.294      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:05.294      "hdgst": ${hdgst:-false},
00:10:05.294      "ddgst": ${ddgst:-false}
00:10:05.294    },
00:10:05.294    "method": "bdev_nvme_attach_controller"
00:10:05.294  }
00:10:05.294  EOF
00:10:05.294  )")
00:10:05.294   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:10:05.294     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:05.294    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:05.294  {
00:10:05.294    "params": {
00:10:05.294      "name": "Nvme$subsystem",
00:10:05.294      "trtype": "$TEST_TRANSPORT",
00:10:05.294      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:05.294      "adrfam": "ipv4",
00:10:05.294      "trsvcid": "$NVMF_PORT",
00:10:05.294      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:05.294      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:05.294      "hdgst": ${hdgst:-false},
00:10:05.294      "ddgst": ${ddgst:-false}
00:10:05.294    },
00:10:05.295    "method": "bdev_nvme_attach_controller"
00:10:05.295  }
00:10:05.295  EOF
00:10:05.295  )")
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:10:05.295   13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3189190
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:10:05.295    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:10:05.295    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:10:05.295    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:05.295    "params": {
00:10:05.295      "name": "Nvme1",
00:10:05.295      "trtype": "rdma",
00:10:05.295      "traddr": "192.168.100.8",
00:10:05.295      "adrfam": "ipv4",
00:10:05.295      "trsvcid": "4420",
00:10:05.295      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:05.295      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:05.295      "hdgst": false,
00:10:05.295      "ddgst": false
00:10:05.295    },
00:10:05.295    "method": "bdev_nvme_attach_controller"
00:10:05.295  }'
00:10:05.295    13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:05.295    "params": {
00:10:05.295      "name": "Nvme1",
00:10:05.295      "trtype": "rdma",
00:10:05.295      "traddr": "192.168.100.8",
00:10:05.295      "adrfam": "ipv4",
00:10:05.295      "trsvcid": "4420",
00:10:05.295      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:05.295      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:05.295      "hdgst": false,
00:10:05.295      "ddgst": false
00:10:05.295    },
00:10:05.295    "method": "bdev_nvme_attach_controller"
00:10:05.295  }'
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:05.295    "params": {
00:10:05.295      "name": "Nvme1",
00:10:05.295      "trtype": "rdma",
00:10:05.295      "traddr": "192.168.100.8",
00:10:05.295      "adrfam": "ipv4",
00:10:05.295      "trsvcid": "4420",
00:10:05.295      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:05.295      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:05.295      "hdgst": false,
00:10:05.295      "ddgst": false
00:10:05.295    },
00:10:05.295    "method": "bdev_nvme_attach_controller"
00:10:05.295  }'
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:10:05.295     13:36:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:05.295    "params": {
00:10:05.295      "name": "Nvme1",
00:10:05.295      "trtype": "rdma",
00:10:05.295      "traddr": "192.168.100.8",
00:10:05.295      "adrfam": "ipv4",
00:10:05.295      "trsvcid": "4420",
00:10:05.295      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:05.295      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:05.295      "hdgst": false,
00:10:05.295      "ddgst": false
00:10:05.295    },
00:10:05.295    "method": "bdev_nvme_attach_controller"
00:10:05.295  }'
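
Each bdevperf instance above receives its controller config as JSON on /dev/fd/63, i.e. via process substitution: gen_nvmf_target_json renders a heredoc with the live transport variables, runs it through jq as a syntax check, and the caller feeds the result in as a file descriptor. A condensed sketch of that pattern (the consumer name is a hypothetical stand-in for bdevperf):

    #!/usr/bin/env bash
    subsystem=1
    TEST_TRANSPORT=rdma
    NVMF_FIRST_TARGET_IP=192.168.100.8
    NVMF_PORT=4420

    # Render one bdev_nvme_attach_controller fragment from a heredoc.
    config=$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )

    jq . <<< "$config"                            # fail fast on malformed JSON
    some_consumer --json <(jq . <<< "$config")    # hypothetical stand-in for bdevperf
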
00:10:05.295  [2024-12-14 13:36:04.884856] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:05.295  [2024-12-14 13:36:04.884956] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:10:05.295  [2024-12-14 13:36:04.887987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:05.295  [2024-12-14 13:36:04.888079] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:10:05.295  [2024-12-14 13:36:04.888141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:05.295  [2024-12-14 13:36:04.888219] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:10:05.295  [2024-12-14 13:36:04.891622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:05.295  [2024-12-14 13:36:04.891703] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:05.553  [2024-12-14 13:36:05.136465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.553  [2024-12-14 13:36:05.229986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.553  [2024-12-14 13:36:05.238147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:10:05.810  [2024-12-14 13:36:05.332774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.810  [2024-12-14 13:36:05.333951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:10:05.810  [2024-12-14 13:36:05.380934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.810  [2024-12-14 13:36:05.430372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:10:05.810  [2024-12-14 13:36:05.476708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:10:06.068  Running I/O for 1 seconds...
00:10:06.068  Running I/O for 1 seconds...
00:10:06.068  Running I/O for 1 seconds...
00:10:06.327  Running I/O for 1 seconds...
00:10:07.152      16792.00 IOPS,    65.59 MiB/s
00:10:07.152                                                                                                  Latency(us)
00:10:07.152  
[2024-12-14T12:36:06.890Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:07.152  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:07.152  	 Nvme1n1             :       1.01   16820.83      65.71       0.00     0.00    7583.07    5006.95   21076.38
00:10:07.152  
[2024-12-14T12:36:06.890Z]  ===================================================================================================================
00:10:07.152  
[2024-12-14T12:36:06.890Z]  Total                       :              16820.83      65.71       0.00     0.00    7583.07    5006.95   21076.38
00:10:07.153      15217.00 IOPS,    59.44 MiB/s
00:10:07.153                                                                                                  Latency(us)
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:07.153  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:07.153  	 Nvme1n1             :       1.01   15264.72      59.63       0.00     0.00    8357.34    4823.45   25060.97
00:10:07.153  
[2024-12-14T12:36:06.891Z]  ===================================================================================================================
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Total                       :              15264.72      59.63       0.00     0.00    8357.34    4823.45   25060.97
00:10:07.153     225432.00 IOPS,   880.59 MiB/s
00:10:07.153                                                                                                  Latency(us)
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:07.153  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:07.153  	 Nvme1n1             :       1.00  225074.25     879.20       0.00     0.00     565.84     257.23    2608.33
00:10:07.153  
[2024-12-14T12:36:06.891Z]  ===================================================================================================================
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Total                       :             225074.25     879.20       0.00     0.00     565.84     257.23    2608.33
00:10:07.153      14431.00 IOPS,    56.37 MiB/s
00:10:07.153                                                                                                  Latency(us)
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:07.153  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:07.153  	 Nvme1n1             :       1.01   14508.11      56.67       0.00     0.00    8798.35    3853.52   18140.36
00:10:07.153  
[2024-12-14T12:36:06.891Z]  ===================================================================================================================
00:10:07.153  
[2024-12-14T12:36:06.891Z]  Total                       :              14508.11      56.67       0.00     0.00    8798.35    3853.52   18140.36
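
A quick cross-check of the tables above: the MiB/s column is just IOPS times the 4096-byte IO size, divided by 2^20.

echo "16820.83 * 4096 / 1048576" | bc -l    # -> 65.70..., matching 65.71
echo "225074.25 * 4096 / 1048576" | bc -l   # -> 879.19..., matching 879.20
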
00:10:07.718   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3189192
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3189194
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3189197
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:07.976  rmmod nvme_rdma
00:10:07.976  rmmod nvme_fabrics
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3188638 ']'
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3188638
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3188638 ']'
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3188638
00:10:07.976    13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.976    13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188638
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188638'
00:10:07.976  killing process with pid 3188638
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3188638
00:10:07.976   13:36:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3188638
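
The trace above is the killprocess helper at work: probe the pid, check the process name via ps, refuse to kill a bare sudo wrapper, then kill and reap. A simplified sketch of that pattern (the real helper in autotest_common.sh has more branches):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0     # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1          # never kill the sudo shim
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
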
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:10:09.942  
00:10:09.942  real	0m12.564s
00:10:09.942  user	0m31.193s
00:10:09.942  sys	0m6.901s
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:09.942  ************************************
00:10:09.942  END TEST nvmf_bdev_io_wait
00:10:09.942  ************************************
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:09.942  ************************************
00:10:09.942  START TEST nvmf_queue_depth
00:10:09.942  ************************************
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma
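
The run_test wrapper invoked above is what frames every test in this log with the START TEST / END TEST banners. A simplified sketch (the real helper also times the test and records results):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
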
00:10:09.942  * Looking for test storage...
00:10:09.942  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
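
The trace above is the lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares component by component. A condensed sketch of the same logic, assuming purely numeric components (the real script routes each component through a decimal() normalizer):

version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' check above
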
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:09.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:09.942  		--rc genhtml_branch_coverage=1
00:10:09.942  		--rc genhtml_function_coverage=1
00:10:09.942  		--rc genhtml_legend=1
00:10:09.942  		--rc geninfo_all_blocks=1
00:10:09.942  		--rc geninfo_unexecuted_blocks=1
00:10:09.942  		
00:10:09.942  		'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:09.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:09.942  		--rc genhtml_branch_coverage=1
00:10:09.942  		--rc genhtml_function_coverage=1
00:10:09.942  		--rc genhtml_legend=1
00:10:09.942  		--rc geninfo_all_blocks=1
00:10:09.942  		--rc geninfo_unexecuted_blocks=1
00:10:09.942  		
00:10:09.942  		'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:09.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:09.942  		--rc genhtml_branch_coverage=1
00:10:09.942  		--rc genhtml_function_coverage=1
00:10:09.942  		--rc genhtml_legend=1
00:10:09.942  		--rc geninfo_all_blocks=1
00:10:09.942  		--rc geninfo_unexecuted_blocks=1
00:10:09.942  		
00:10:09.942  		'
00:10:09.942    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:09.942  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:09.942  		--rc genhtml_branch_coverage=1
00:10:09.942  		--rc genhtml_function_coverage=1
00:10:09.942  		--rc genhtml_legend=1
00:10:09.942  		--rc geninfo_all_blocks=1
00:10:09.942  		--rc geninfo_unexecuted_blocks=1
00:10:09.942  		
00:10:09.942  		'
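
An illustrative use of the $LCOV command exported above, capturing coverage with branch and function coverage enabled; the build directory and output names here are assumptions, not taken from this run:

$LCOV -q -c -d ./build -o coverage.info   # capture per the --rc options above
genhtml coverage.info -o coverage_html    # render the HTML report
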
00:10:09.942   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:10:09.942     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:09.943     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:10:09.943     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:10:09.943     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:09.943     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:09.943     13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:09.943      13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:09.943      13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:09.943      13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:09.943      13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:10:09.943      13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
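
Note how the PATH echoed above has accumulated the same /opt/golangci, /opt/protoc and /opt/go entries many times over: paths/export.sh prepends them on every source, without deduplication. A hedged one-liner that would collapse such duplicates while preserving order (illustrative, not part of the test scripts):

# Keep the first occurrence of each PATH entry, drop later repeats.
PATH=$(printf '%s\n' "${PATH//:/$'\n'}" | awk '!seen[$0]++' | paste -sd:)
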
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:09.943  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
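
The "integer expression expected" error captured above comes from testing an unset variable with -eq, which expands to '[' '' -eq 1 ']'. The defensive fix is to expand the variable with a numeric default; the variable name below is hypothetical, standing in for whatever common.sh line 33 actually tests:

# Guard against an empty/unset value in an integer test.
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
    echo "feature enabled"
fi
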
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:09.943    13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:10:09.943   13:36:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
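
The array setup above whitelists supported NIC device IDs (Intel E810/X722 and a range of Mellanox parts) before scanning the bus. A rough lspci-based equivalent for the 0x15b3:0x1015 (ConnectX-4 Lx) functions this host actually reports; the real script works from a pre-populated pci_bus_cache instead:

lspci -Dnd 15b3:1015 | while read -r pci _; do
    echo "Found $pci (0x15b3 - 0x1015)"
done
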
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:10:18.061  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:10:18.061  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:10:18.061  Found net devices under 0000:d9:00.0: mlx_0_0
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:18.061   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:10:18.062  Found net devices under 0000:d9:00.1: mlx_0_1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
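
The per-device lookup traced above in one line: the network interfaces belonging to a PCI function are listed under its sysfs node, which is exactly the glob the script expands.

ls /sys/bus/pci/devices/0000:d9:00.0/net/   # -> mlx_0_0 on this host
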
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm
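
The same RDMA/InfiniBand module set loaded above, written as a loop in the order the trace shows:

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done
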
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:18.062  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:18.062      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:18.062      altname enp217s0f0np0
00:10:18.062      altname ens818f0np0
00:10:18.062      inet 192.168.100.8/24 scope global mlx_0_0
00:10:18.062         valid_lft forever preferred_lft forever
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:18.062  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:18.062      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:18.062      altname enp217s0f1np1
00:10:18.062      altname ens818f1np1
00:10:18.062      inet 192.168.100.9/24 scope global mlx_0_1
00:10:18.062         valid_lft forever preferred_lft forever
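
The get_ip_address helper traced above, condensed: take the first IPv4 address on an interface and strip the prefix length.

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
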
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:18.062      13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:18.062      13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:18.062     13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:10:18.062  192.168.100.9'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:10:18.062  192.168.100.9'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:10:18.062  192.168.100.9'
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2
00:10:18.062    13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
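
The first/second target IP extraction from the newline-separated RDMA_IP_LIST, exactly as traced above:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
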
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:10:18.062   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3193607
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3193607
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3193607 ']'
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.063  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:18.063   13:36:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
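
The nvmfappstart/waitforlisten sequence above launches nvmf_tgt in the background and blocks until its RPC socket answers. A minimal sketch of that pattern; the retry bound mirrors the max_retries=100 local above, and the rpc.py path is an assumption:

./build/bin/nvmf_tgt -m 0x2 &
nvmfpid=$!
for (( i = 0; i < 100; i++ )); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
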
00:10:18.063  [2024-12-14 13:36:16.706906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:18.063  [2024-12-14 13:36:16.707009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:18.063  [2024-12-14 13:36:16.840944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.063  [2024-12-14 13:36:16.935765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:18.063  [2024-12-14 13:36:16.935816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:18.063  [2024-12-14 13:36:16.935830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:18.063  [2024-12-14 13:36:16.935844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:18.063  [2024-12-14 13:36:16.935856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:18.063  [2024-12-14 13:36:16.937190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063  [2024-12-14 13:36:17.563470] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fdf2c13e940) succeed.
00:10:18.063  [2024-12-14 13:36:17.572573] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fdf2bfbd940) succeed.
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063  Malloc0
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.063  [2024-12-14 13:36:17.743204] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
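
The four rpc_cmd calls traced above, written out as a plain script: create the RDMA transport, back a 64 MiB malloc bdev with 512-byte blocks, expose it as a namespace of cnode1, and listen on 192.168.100.8:4420 (rpc.py path is an assumption; all arguments are taken from the trace):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
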
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3193683
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3193683 /var/tmp/bdevperf.sock
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3193683 ']'
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:10:18.063  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:18.063   13:36:17 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:18.322  [2024-12-14 13:36:17.831121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:18.322  [2024-12-14 13:36:17.831207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193683 ]
00:10:18.322  [2024-12-14 13:36:17.964552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.580  [2024-12-14 13:36:18.070068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:19.146  NVMe0n1
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.146   13:36:18 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
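
How the run below is driven, pieced together from the trace: bdevperf is started with -z (idle until told to run) on its own RPC socket, the controller is attached over that socket, then bdevperf.py triggers the actual I/O.

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
# ... rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... (as above)
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
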
00:10:19.146  Running I/O for 10 seconds...
00:10:21.456      14978.00 IOPS,    58.51 MiB/s
[2024-12-14T12:36:22.128Z]     15123.50 IOPS,    59.08 MiB/s
[2024-12-14T12:36:23.062Z]     15360.00 IOPS,    60.00 MiB/s
[2024-12-14T12:36:24.002Z]     15360.00 IOPS,    60.00 MiB/s
[2024-12-14T12:36:24.936Z]     15416.00 IOPS,    60.22 MiB/s
[2024-12-14T12:36:25.869Z]     15520.33 IOPS,    60.63 MiB/s
[2024-12-14T12:36:27.244Z]     15506.29 IOPS,    60.57 MiB/s
[2024-12-14T12:36:28.178Z]     15539.00 IOPS,    60.70 MiB/s
[2024-12-14T12:36:29.113Z]     15581.22 IOPS,    60.86 MiB/s
[2024-12-14T12:36:29.113Z]     15564.80 IOPS,    60.80 MiB/s
00:10:29.375                                                                                                  Latency(us)
00:10:29.375  
[2024-12-14T12:36:29.113Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:29.375  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:29.375  	 Verification LBA range: start 0x0 length 0x4000
00:10:29.375  	 NVMe0n1             :      10.05   15596.07      60.92       0.00     0.00   65481.00   25899.83   42991.62
00:10:29.375  
[2024-12-14T12:36:29.113Z]  ===================================================================================================================
00:10:29.375  
[2024-12-14T12:36:29.113Z]  Total                       :              15596.07      60.92       0.00     0.00   65481.00   25899.83   42991.62
00:10:29.375  {
00:10:29.375    "results": [
00:10:29.375      {
00:10:29.375        "job": "NVMe0n1",
00:10:29.375        "core_mask": "0x1",
00:10:29.375        "workload": "verify",
00:10:29.375        "status": "finished",
00:10:29.375        "verify_range": {
00:10:29.375          "start": 0,
00:10:29.375          "length": 16384
00:10:29.375        },
00:10:29.375        "queue_depth": 1024,
00:10:29.375        "io_size": 4096,
00:10:29.375        "runtime": 10.045607,
00:10:29.375        "iops": 15596.070998994885,
00:10:29.375        "mibps": 60.92215233982377,
00:10:29.375        "io_failed": 0,
00:10:29.375        "io_timeout": 0,
00:10:29.375        "avg_latency_us": 65481.00316862745,
00:10:29.375        "min_latency_us": 25899.8272,
00:10:29.375        "max_latency_us": 42991.616
00:10:29.375      }
00:10:29.375    ],
00:10:29.375    "core_count": 1
00:10:29.375  }
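The JSON block above is the machine-readable result that bdevperf.py's perform_tests returns; the throughput columns can be cross-checked from it directly. A minimal sketch, assuming the JSON was saved to results.json (a hypothetical filename) and jq is installed:

    # Recompute MiB/s from IOPS and IO size; should reproduce the reported "mibps".
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.iops * .io_size / 1048576) MiB/s"' results.json
    # 15596.07 IOPS x 4096 B = 60.92 MiB/s, matching the table and the "mibps" field.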
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3193683
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3193683 ']'
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3193683
00:10:29.375    13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:29.375    13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193683
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:29.375   13:36:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:29.375   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193683'
00:10:29.375  killing process with pid 3193683
00:10:29.375   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3193683
00:10:29.375  Received shutdown signal, test time was about 10.000000 seconds
00:10:29.375                                                                                                  Latency(us)
[2024-12-14T12:36:29.113Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-14T12:36:29.113Z]  ===================================================================================================================
[2024-12-14T12:36:29.113Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:10:29.375   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3193683
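killprocess, as traced above, verifies the process name before signalling and then waits so the exit status is reaped. The same guard-then-kill pattern in standalone form (the pid is the one from this run, shown for illustration):

    pid=3193683
    name=$(ps --no-headers -o comm= "$pid")          # e.g. "reactor_0"
    if [ -n "$name" ] && [ "$name" != sudo ]; then   # refuse to signal a bare sudo wrapper
        kill "$pid"
        wait "$pid" 2>/dev/null                      # wait only reaps children of this shell
    fi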
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:30.311  rmmod nvme_rdma
00:10:30.311  rmmod nvme_fabrics
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
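nvmfcleanup drops set -e and retries the module unload up to 20 times, since nvme-rdma can stay pinned while disconnects drain; once modprobe -r succeeds it restores set -e and returns, as seen above. A condensed sketch of that loop (the backoff between attempts is an assumption, not visible in the trace):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed backoff between attempts
    done
    set -e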
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3193607 ']'
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3193607
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3193607 ']'
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3193607
00:10:30.311    13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:30.311    13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193607
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193607'
00:10:30.311  killing process with pid 3193607
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3193607
00:10:30.311   13:36:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3193607
00:10:31.690   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:31.690   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:10:31.690  
00:10:31.690  real	0m22.106s
00:10:31.690  user	0m28.873s
00:10:31.690  sys	0m6.466s
00:10:31.690   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:31.690   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:31.690  ************************************
00:10:31.690  END TEST nvmf_queue_depth
00:10:31.690  ************************************
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:31.999  ************************************
00:10:31.999  START TEST nvmf_target_multipath
00:10:31.999  ************************************
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma
00:10:31.999  * Looking for test storage...
00:10:31.999  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
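The trace above is the version gate lt 1.15 2: each version string is split on '.', '-' and ':' and compared field by field, treating missing fields as zero. The helper's core shape, reduced to a sketch (not the verbatim scripts/common.sh source):

    lt() {   # succeeds when version $1 sorts strictly before $2
        local -a ver1 ver2; local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # matches the return 0 traced above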
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:31.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:31.999  		--rc genhtml_branch_coverage=1
00:10:31.999  		--rc genhtml_function_coverage=1
00:10:31.999  		--rc genhtml_legend=1
00:10:31.999  		--rc geninfo_all_blocks=1
00:10:31.999  		--rc geninfo_unexecuted_blocks=1
00:10:31.999  		
00:10:31.999  		'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:31.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:31.999  		--rc genhtml_branch_coverage=1
00:10:31.999  		--rc genhtml_function_coverage=1
00:10:31.999  		--rc genhtml_legend=1
00:10:31.999  		--rc geninfo_all_blocks=1
00:10:31.999  		--rc geninfo_unexecuted_blocks=1
00:10:31.999  		
00:10:31.999  		'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:31.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:31.999  		--rc genhtml_branch_coverage=1
00:10:31.999  		--rc genhtml_function_coverage=1
00:10:31.999  		--rc genhtml_legend=1
00:10:31.999  		--rc geninfo_all_blocks=1
00:10:31.999  		--rc geninfo_unexecuted_blocks=1
00:10:31.999  		
00:10:31.999  		'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:31.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:31.999  		--rc genhtml_branch_coverage=1
00:10:31.999  		--rc genhtml_function_coverage=1
00:10:31.999  		--rc genhtml_legend=1
00:10:31.999  		--rc geninfo_all_blocks=1
00:10:31.999  		--rc geninfo_unexecuted_blocks=1
00:10:31.999  		
00:10:31.999  		'
00:10:31.999   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:31.999     13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:31.999      13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:31.999      13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:31.999      13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:31.999      13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:10:31.999      13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:31.999    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:32.000  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
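That "integer expression expected" message is bash rejecting the test traced just above it: '[' '' -eq 1 ']' feeds an empty string to -eq, which requires integers on both sides. The usual fix is to default the variable before testing; a minimal sketch with a hypothetical stand-in variable:

    : "${FLAG:=0}"               # FLAG stands in for the script's empty variable
    if [ "$FLAG" -eq 1 ]; then   # -eq now always sees an integer
        echo "feature enabled"
    fi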
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:32.000    13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
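xtrace_disable_per_cmd mutes tracing for a single command: the eval'd '15> /dev/null' redirection points file descriptor 15 at /dev/null for the duration of _remove_spdk_ns, and since the harness routes xtrace output to fd 15 (the BASH_XTRACEFD mechanism, assumed here), the function's trace is discarded. A sketch of the trick:

    exec 15>&2            # fd 15 normally mirrors stderr
    BASH_XTRACEFD=15      # bash writes set -x output to fd 15
    set -x
    eval 'quiet_step 15> /dev/null'   # quiet_step is hypothetical; trace from inside it is dropped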
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:10:32.000   13:36:31 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:40.118   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
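The arrays above are filled from a PCI bus cache keyed by vendor:device IDs; 0x15b3 is Mellanox, and 0x1015 is the ConnectX-4 Lx found later in the scan. Outside the harness, a comparable lookup works with plain pciutils:

    # List Mellanox NICs with full PCI addresses and [vendor:device] IDs.
    lspci -Dnn | grep -i 15b3
    # e.g. 0000:d9:00.0 Ethernet controller ... [15b3:1015]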
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:10:40.119  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:10:40.119  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:10:40.119  Found net devices under 0000:d9:00.0: mlx_0_0
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:10:40.119  Found net devices under 0000:d9:00.1: mlx_0_1
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm
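load_ib_rdma_modules brings the kernel RDMA stack up in dependency order, verbs core first, then the connection managers. The same sequence written as a loop:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done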
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:40.119     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:40.119     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:40.119  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:40.119      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:40.119      altname enp217s0f0np0
00:10:40.119      altname ens818f0np0
00:10:40.119      inet 192.168.100.8/24 scope global mlx_0_0
00:10:40.119         valid_lft forever preferred_lft forever
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:40.119    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:40.119   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:40.120  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:40.120      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:40.120      altname enp217s0f1np1
00:10:40.120      altname ens818f1np1
00:10:40.120      inet 192.168.100.9/24 scope global mlx_0_1
00:10:40.120         valid_lft forever preferred_lft forever
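get_ip_address pulls the IPv4 address out of the one-line 'ip -o' form: field 4 is addr/prefix, and cut strips the prefix length. Standalone, against the interface shown above:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # -> 192.168.100.8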
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:40.120      13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:40.120      13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:40.120     13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:10:40.120  192.168.100.9'
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:10:40.120  192.168.100.9'
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:10:40.120  192.168.100.9'
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2
00:10:40.120    13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
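The two target IPs are then peeled off the newline-separated RDMA_IP_LIST: head -n 1 yields the first address, and tail -n +2 | head -n 1 the second. Equivalently:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)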
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now'
00:10:40.120  run this test only with TCP transport for now
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:40.120  rmmod nvme_rdma
00:10:40.120  rmmod nvme_fabrics
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:40.120   13:36:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:10:40.120  
00:10:40.120  real	0m7.536s
00:10:40.120  user	0m2.133s
00:10:40.120  sys	0m5.605s
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:40.120  ************************************
00:10:40.120  END TEST nvmf_target_multipath
00:10:40.120  ************************************
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:40.120  ************************************
00:10:40.120  START TEST nvmf_zcopy
00:10:40.120  ************************************
00:10:40.120   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:10:40.120  * Looking for test storage...
00:10:40.120  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:40.120     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:10:40.120     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:40.120    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:40.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.121  		--rc genhtml_branch_coverage=1
00:10:40.121  		--rc genhtml_function_coverage=1
00:10:40.121  		--rc genhtml_legend=1
00:10:40.121  		--rc geninfo_all_blocks=1
00:10:40.121  		--rc geninfo_unexecuted_blocks=1
00:10:40.121  		
00:10:40.121  		'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:40.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.121  		--rc genhtml_branch_coverage=1
00:10:40.121  		--rc genhtml_function_coverage=1
00:10:40.121  		--rc genhtml_legend=1
00:10:40.121  		--rc geninfo_all_blocks=1
00:10:40.121  		--rc geninfo_unexecuted_blocks=1
00:10:40.121  		
00:10:40.121  		'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:40.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.121  		--rc genhtml_branch_coverage=1
00:10:40.121  		--rc genhtml_function_coverage=1
00:10:40.121  		--rc genhtml_legend=1
00:10:40.121  		--rc geninfo_all_blocks=1
00:10:40.121  		--rc geninfo_unexecuted_blocks=1
00:10:40.121  		
00:10:40.121  		'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:40.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.121  		--rc genhtml_branch_coverage=1
00:10:40.121  		--rc genhtml_function_coverage=1
00:10:40.121  		--rc genhtml_legend=1
00:10:40.121  		--rc geninfo_all_blocks=1
00:10:40.121  		--rc geninfo_unexecuted_blocks=1
00:10:40.121  		
00:10:40.121  		'
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
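These are the fixed constants every NVMf suite shares: listener ports 4420-4422, the 192.168.100.0/24 test subnet starting at host .8, a host NQN freshly generated by nvme gen-hostnqn, and the nqn.2016-06.io.spdk:testnqn test subsystem. A hedged sketch of how the first target address falls out of these constants (it matches the 192.168.100.8 seen later in this run):

    # sketch: composing the first target address from the constants above
    first_ip="${NVMF_IP_PREFIX}.${NVMF_IP_LEAST_ADDR}"   # -> 192.168.100.8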
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:40.121     13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:40.121      13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.121      13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.121      13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.121      13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:10:40.121      13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
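Note on the very long PATH lines above: paths/export.sh prepends the go, protoc, and golangci directories each time it is sourced, so after several sourcings PATH carries the same entries many times over. That is harmless for command lookup but noisy in logs. A hedged, order-preserving dedup one-liner (illustrative only, not part of the harness):

    # sketch: drop duplicate PATH entries while preserving order
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')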
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:40.121  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
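The "integer expression expected" line above is the script complaining about itself: at nvmf/common.sh line 33 the test '[' '' -eq 1 ']' compares an empty variable numerically, so [ rejects it and the branch simply falls through; the run is unaffected. A hedged way to make such a guard robust (VAR is a placeholder, not the variable in common.sh):

    # sketch: default an empty flag to 0 before a numeric test (VAR is a stand-in)
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "flag set"
    fi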
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:40.121    13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:10:40.121   13:36:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
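gather_supported_nvmf_pci_devs builds whitelists keyed on PCI vendor:device IDs: Intel E810 (0x1592, 0x159b) and X722 (0x37d2), plus a range of Mellanox parts (0x1013-0x1021, 0xa2d6, 0xa2dc). With the mlx5 branch taken, pci_devs collapses to the Mellanox list, which matches two devices on this node. A hedged probe for the same vendor outside the harness's pci_bus_cache:

    # sketch: list Mellanox NICs by vendor ID 0x15b3, as the cache lookup above does
    lspci -Dnn | grep -i '15b3'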
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:10:46.684  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:10:46.684  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:46.684   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
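Both ports of one adapter (0x15b3:0x1015, a ConnectX-4 Lx) are found bound to mlx5_core; since the device is neither 0x1017 nor 0x1019 and the transport is rdma, the harness widens the connect command to 'nvme connect -i 15' (in nvme-cli, -i maps to --nr-io-queues). A simplified sketch of the guard traced at nvmf/common.sh@376-388 (values from this run):

    # sketch: per-device connect tuning, simplified from the trace above
    if [[ $device != 0x1017 && $device != 0x1019 && $TEST_TRANSPORT == rdma ]]; then
        NVME_CONNECT='nvme connect -i 15'   # -i = --nr-io-queues in nvme-cli
    fi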
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:10:46.685  Found net devices under 0000:d9:00.0: mlx_0_0
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:10:46.685  Found net devices under 0000:d9:00.1: mlx_0_1
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips
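rdma_device_init loads the kernel stack NVMe-oF over RDMA needs: the IB core and connection manager, the user-space verbs/umad interfaces, the iWARP CM, and the RDMA CM plus its userspace counterpart. The same sequence as one loop, for reference:

    # sketch: the module set loaded above, as a single loop
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done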
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:46.685     13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:46.685     13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:46.685  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:46.685      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:46.685      altname enp217s0f0np0
00:10:46.685      altname ens818f0np0
00:10:46.685      inet 192.168.100.8/24 scope global mlx_0_0
00:10:46.685         valid_lft forever preferred_lft forever
00:10:46.685   13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:46.685    13:36:45 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:46.685  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:46.685      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:46.685      altname enp217s0f1np1
00:10:46.685      altname ens818f1np1
00:10:46.685      inet 192.168.100.9/24 scope global mlx_0_1
00:10:46.685         valid_lft forever preferred_lft forever
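For each RDMA-capable interface, get_ip_address pulls the IPv4 address out of ip(8) output with an awk/cut pipeline; here mlx_0_0 already carries 192.168.100.8 and mlx_0_1 carries 192.168.100.9, so allocate_nic_ips keeps the existing addresses instead of assigning new ones from the .8 base. The helper, reconstructed from the trace at nvmf/common.sh@116-117:

    # sketch: extract the IPv4 address of an interface, as traced above
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node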
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:46.685      13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:46.685      13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:46.685     13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:46.685   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:10:46.685  192.168.100.9'
00:10:46.685    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:10:46.685  192.168.100.9'
00:10:46.686    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:10:46.686    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:10:46.686  192.168.100.9'
00:10:46.686    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2
00:10:46.686    13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma
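With both addresses collected into RDMA_IP_LIST, the harness takes the first line as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP, widens the transport options to '-t rdma --num-shared-buffers 1024', and loads nvme-rdma on the initiator side. The head/tail selection, as traced:

    # sketch: first/second target selection from the two-line RDMA_IP_LIST
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)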
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3202892
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3202892
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3202892 ']'
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:46.686  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:46.686   13:36:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:46.686  [2024-12-14 13:36:46.216843] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:46.686  [2024-12-14 13:36:46.216943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:46.686  [2024-12-14 13:36:46.346386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:46.945  [2024-12-14 13:36:46.442143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:46.945  [2024-12-14 13:36:46.442196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:46.945  [2024-12-14 13:36:46.442208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:46.945  [2024-12-14 13:36:46.442221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:46.945  [2024-12-14 13:36:46.442230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:46.945  [2024-12-14 13:36:46.443478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
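nvmfappstart launches the SPDK target with '-i 0 -e 0xFFFF -m 0x2' (shared-memory id 0, all tracepoint groups, core mask 0x2 — hence the single reactor on core 1 above), and waitforlisten polls until the app answers on /var/tmp/spdk.sock; the DPDK/EAL notices are the normal single-core startup banner. A simplified sketch of the wait (the real waitforlisten also bounds retries):

    # sketch: poll the RPC socket until the target is up, simplified
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done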
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma'
00:10:47.528  Unsupported transport: rdma
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0
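zcopy.sh only exercises zero-copy on TCP: with --transport=rdma the guard at zcopy.sh@15-17 prints the message above and exits 0, so the suite passes as a no-op and the EXIT trap runs cleanup. A sketch of that guard (the variable name follows common SPDK test usage and is an assumption here):

    # sketch: the transport guard traced at target/zcopy.sh@15-17
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"
        exit 0
    fi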
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:10:47.528    13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:10:47.528  nvmf_trace.0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0
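Because the target ran with '-e 0xFFFF', the exit path archives the shared-memory trace file: process_shm locates /dev/shm/nvmf_trace.0 and tars it into the job's output directory for offline spdk_trace analysis. The same step, simplified (output_dir is a stand-in for the path hard-coded in the trace):

    # sketch: archiving shm trace files, simplified from autotest_common.sh@818-825
    for n in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -czf "$output_dir/${n}_shm.tar.gz" "$n"
    done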
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:47.528  rmmod nvme_rdma
00:10:47.528  rmmod nvme_fabrics
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3202892 ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3202892
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3202892 ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3202892
00:10:47.528    13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:47.528    13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3202892
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3202892'
00:10:47.528  killing process with pid 3202892
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3202892
00:10:47.528   13:36:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3202892
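nvmftestfini unloads nvme-rdma and nvme-fabrics inside a set +e window with up to 20 attempts (modules can stay busy briefly after disconnects), then killprocess verifies the pid's command name is not sudo before killing and waiting on it. A simplified sketch of the retry loop (the sleep between attempts is an assumption, not in the trace):

    # sketch: retrying module unload, simplified from nvmf/common.sh@124-128
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    set -e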
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:10:48.918  
00:10:48.918  real	0m9.170s
00:10:48.918  user	0m4.333s
00:10:48.918  sys	0m5.629s
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:48.918  ************************************
00:10:48.918  END TEST nvmf_zcopy
00:10:48.918  ************************************
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:48.918  ************************************
00:10:48.918  START TEST nvmf_nmic
00:10:48.918  ************************************
00:10:48.918   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:10:48.918  * Looking for test storage...
00:10:48.918  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
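run_test is the wrapper producing the START/END banners and the real/user/sys timing block seen above for nvmf_zcopy; here it launches nmic.sh with the same rdma transport, which re-sources nvmf/common.sh, hence the repeated environment bring-up that follows. A simplified sketch of what the wrapper does around each suite (the real one also validates arguments and asterisk-frames the banners):

    # sketch: run_test wrapper, simplified
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }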
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:48.918     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:48.918    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:48.918  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:48.918  		--rc genhtml_branch_coverage=1
00:10:48.918  		--rc genhtml_function_coverage=1
00:10:48.918  		--rc genhtml_legend=1
00:10:48.919  		--rc geninfo_all_blocks=1
00:10:48.919  		--rc geninfo_unexecuted_blocks=1
00:10:48.919  		
00:10:48.919  		'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:48.919  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:48.919  		--rc genhtml_branch_coverage=1
00:10:48.919  		--rc genhtml_function_coverage=1
00:10:48.919  		--rc genhtml_legend=1
00:10:48.919  		--rc geninfo_all_blocks=1
00:10:48.919  		--rc geninfo_unexecuted_blocks=1
00:10:48.919  		
00:10:48.919  		'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:48.919  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:48.919  		--rc genhtml_branch_coverage=1
00:10:48.919  		--rc genhtml_function_coverage=1
00:10:48.919  		--rc genhtml_legend=1
00:10:48.919  		--rc geninfo_all_blocks=1
00:10:48.919  		--rc geninfo_unexecuted_blocks=1
00:10:48.919  		
00:10:48.919  		'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:48.919  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:48.919  		--rc genhtml_branch_coverage=1
00:10:48.919  		--rc genhtml_function_coverage=1
00:10:48.919  		--rc genhtml_legend=1
00:10:48.919  		--rc geninfo_all_blocks=1
00:10:48.919  		--rc geninfo_unexecuted_blocks=1
00:10:48.919  		
00:10:48.919  		'
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:48.919     13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:48.919      13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:48.919      13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:48.919      13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:48.919      13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:48.919      13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:48.919  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:48.919    13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:10:48.919   13:36:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:10:55.485  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:10:55.485  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:10:55.485  Found net devices under 0000:d9:00.0: mlx_0_0
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:10:55.485  Found net devices under 0000:d9:00.1: mlx_0_1
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
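Each selected PCI function is then mapped to its kernel netdev through sysfs; the glob at common.sh line 411 and the strip at line 427 reduce to this standalone snippet (the address is the first function from this run):

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"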
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init
00:10:55.485   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm
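rdma_device_init begins by loading the kernel IB/RDMA stack; the module list is exactly the one traced above, and the loop form below is just a compact equivalent, not SPDK's code:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done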
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:55.745  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:55.745      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:55.745      altname enp217s0f0np0
00:10:55.745      altname ens818f0np0
00:10:55.745      inet 192.168.100.8/24 scope global mlx_0_0
00:10:55.745         valid_lft forever preferred_lft forever
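allocate_nic_ips walks get_rdma_if_list and reads each interface's IPv4 address; the awk/cut pipeline traced above reduces to the helper below (reconstructed from the trace, not copied from nvmf/common.sh). On this host both interfaces already carried addresses, so the branch that would assign one is skipped.

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host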
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:55.745  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:55.745      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:55.745      altname enp217s0f1np1
00:10:55.745      altname ens818f1np1
00:10:55.745      inet 192.168.100.9/24 scope global mlx_0_1
00:10:55.745         valid_lft forever preferred_lft forever
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:10:55.745      13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:10:55.745      13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:55.745     13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:10:55.745  192.168.100.9'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:10:55.745  192.168.100.9'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:10:55.745  192.168.100.9'
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1
00:10:55.745    13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
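The two discovered addresses come back as a single newline-separated string, and the first and second target IPs are peeled off with head and tail. An equivalent standalone form (the exact pipe ordering inside common.sh is an assumption; the commands themselves are from the trace):

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)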
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3206612
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3206612
00:10:55.745   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3206612 ']'
00:10:55.746   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:55.746   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:56.004   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:56.004  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:56.004   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:56.004   13:36:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.004  [2024-12-14 13:36:55.570752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:10:56.004  [2024-12-14 13:36:55.570847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:56.004  [2024-12-14 13:36:55.702328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:56.263  [2024-12-14 13:36:55.801931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:56.263  [2024-12-14 13:36:55.801986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:56.263  [2024-12-14 13:36:55.802003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:56.263  [2024-12-14 13:36:55.802019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:56.263  [2024-12-14 13:36:55.802032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:56.263  [2024-12-14 13:36:55.804500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:56.263  [2024-12-14 13:36:55.804574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:10:56.263  [2024-12-14 13:36:55.804637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:56.263  [2024-12-14 13:36:55.804642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
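nvmfappstart launched nvmf_tgt with core mask 0xF, which is why four reactors come up, and waitforlisten now polls the RPC socket until the application answers (the trace shows max_retries=100). A sketch of that polling idiom, assuming rpc.py's generic rpc_get_methods call:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # app died before listening
            "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }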
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.829   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.829  [2024-12-14 13:36:56.468237] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fe3a9353940) succeed.
00:10:56.829  [2024-12-14 13:36:56.478259] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fe3a930f940) succeed.
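Creating the rdma transport is what makes the target claim both mlx5 devices (the two NOTICE lines above). Written out, the RPC behind rpc_cmd is the following, where per rpc.py's option table -u is the I/O unit size:

    "$rpc_py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192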
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.088  Malloc0
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.088   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.346  [2024-12-14 13:36:56.835195] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
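The stand-up for this test is four RPCs: a 64 MiB malloc bdev with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, the namespace, and an RDMA listener on 192.168.100.8:4420. The same calls as standalone commands:

    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420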
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:10:57.346  test case1: single bdev can't be used in multiple subsystems
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.346  [2024-12-14 13:36:56.862985] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:10:57.346  [2024-12-14 13:36:56.863023] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:10:57.346  [2024-12-14 13:36:56.863043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:57.346  request:
00:10:57.346  {
00:10:57.346  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:57.346  "namespace": {
00:10:57.346  "bdev_name": "Malloc0",
00:10:57.346  "no_auto_visible": false,
00:10:57.346  "hide_metadata": false
00:10:57.346  },
00:10:57.346  "method": "nvmf_subsystem_add_ns",
00:10:57.346  "req_id": 1
00:10:57.346  }
00:10:57.346  Got JSON-RPC error response
00:10:57.346  response:
00:10:57.346  {
00:10:57.346  "code": -32602,
00:10:57.346  "message": "Invalid parameters"
00:10:57.346  }
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:10:57.346   Adding namespace failed - expected result.
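Test case 1 passes precisely because the RPC fails: Malloc0 was claimed exclusive_write by cnode1 when it was added there, so attaching it to cnode2 is rejected with -32602 and the script inverts the status. The pattern, condensed from the traced lines of nmic.sh:

    nmic_status=0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
    if [ "$nmic_status" -eq 0 ]; then
        echo "namespace was added to a second subsystem - unexpected"
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'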
00:10:57.346   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:10:57.346  test case2: host connect to nvmf target in multiple paths
00:10:57.347   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:10:57.347   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.347   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:57.347  [2024-12-14 13:36:56.879049] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:10:57.347   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.347   13:36:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:10:58.281   13:36:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
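Test case 2 connects twice to the same subsystem NQN, once per listener port (4420 and 4421), so the host ends up with two controllers in front of one namespace; per nvme-cli, -i 15 requests 15 I/O queues per connection. With both paths up, the host-side view can be checked with:

    nvme list-subsys   # should show two rdma controllers under nqn.2016-06.io.spdk:cnode1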
00:10:59.215   13:36:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:59.215   13:36:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:10:59.215   13:36:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:59.215   13:36:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:59.215   13:36:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:11:01.744   13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:01.744    13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:01.744    13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:01.744   13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:01.744   13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:01.744   13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
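waitforserial polls lsblk until a block device with the target's serial shows up; the loop below is reconstructed from the traced bounds (16 tries, 2-second sleeps), not copied from autotest_common.sh:

    waitforserial() {
        local serial=$1 i=0 nvme_devices
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial") || true
            (( nvme_devices >= 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME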
00:11:01.744   13:37:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:01.744  [global]
00:11:01.744  thread=1
00:11:01.744  invalidate=1
00:11:01.744  rw=write
00:11:01.744  time_based=1
00:11:01.744  runtime=1
00:11:01.744  ioengine=libaio
00:11:01.744  direct=1
00:11:01.744  bs=4096
00:11:01.744  iodepth=1
00:11:01.744  norandommap=0
00:11:01.744  numjobs=1
00:11:01.744  
00:11:01.744  verify_dump=1
00:11:01.744  verify_backlog=512
00:11:01.744  verify_state_save=0
00:11:01.744  do_verify=1
00:11:01.744  verify=crc32c-intel
00:11:01.744  [job0]
00:11:01.744  filename=/dev/nvme0n1
00:11:01.744  Could not set queue depth (nvme0n1)
00:11:01.744  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:01.744  fio-3.35
00:11:01.744  Starting 1 thread
00:11:02.679  
00:11:02.679  job0: (groupid=0, jobs=1): err= 0: pid=3207710: Sat Dec 14 13:37:02 2024
00:11:02.679    read: IOPS=6144, BW=24.0MiB/s (25.2MB/s)(24.0MiB/1000msec)
00:11:02.679      slat (nsec): min=8329, max=40202, avg=9530.58, stdev=1239.21
00:11:02.679      clat (usec): min=52, max=148, avg=66.79, stdev= 4.37
00:11:02.679       lat (usec): min=65, max=158, avg=76.32, stdev= 4.54
00:11:02.679      clat percentiles (usec):
00:11:02.679       |  1.00th=[   60],  5.00th=[   61], 10.00th=[   62], 20.00th=[   64],
00:11:02.679       | 30.00th=[   65], 40.00th=[   66], 50.00th=[   67], 60.00th=[   68],
00:11:02.679       | 70.00th=[   69], 80.00th=[   71], 90.00th=[   73], 95.00th=[   75],
00:11:02.679       | 99.00th=[   79], 99.50th=[   80], 99.90th=[   87], 99.95th=[  103],
00:11:02.679       | 99.99th=[  149]
00:11:02.679    write: IOPS=6467, BW=25.3MiB/s (26.5MB/s)(25.3MiB/1000msec); 0 zone resets
00:11:02.679      slat (nsec): min=10777, max=43319, avg=12189.34, stdev=1501.74
00:11:02.679      clat (usec): min=37, max=170, avg=64.22, stdev= 4.45
00:11:02.679       lat (usec): min=63, max=182, avg=76.41, stdev= 4.70
00:11:02.679      clat percentiles (usec):
00:11:02.679       |  1.00th=[   57],  5.00th=[   59], 10.00th=[   60], 20.00th=[   61],
00:11:02.679       | 30.00th=[   62], 40.00th=[   63], 50.00th=[   64], 60.00th=[   65],
00:11:02.679       | 70.00th=[   67], 80.00th=[   68], 90.00th=[   70], 95.00th=[   72],
00:11:02.679       | 99.00th=[   76], 99.50th=[   78], 99.90th=[   91], 99.95th=[  100],
00:11:02.679       | 99.99th=[  172]
00:11:02.679     bw (  KiB/s): min=25848, max=25848, per=99.92%, avg=25848.00, stdev= 0.00, samples=1
00:11:02.679     iops        : min= 6462, max= 6462, avg=6462.00, stdev= 0.00, samples=1
00:11:02.679    lat (usec)   : 50=0.02%, 100=99.92%, 250=0.06%
00:11:02.679    cpu          : usr=12.00%, sys=18.40%, ctx=12612, majf=0, minf=1
00:11:02.680    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:02.680       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:02.680       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:02.680       issued rwts: total=6144,6467,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:02.680       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:02.680  
00:11:02.680  Run status group 0 (all jobs):
00:11:02.680     READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=24.0MiB (25.2MB), run=1000-1000msec
00:11:02.680    WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=25.3MiB (26.5MB), run=1000-1000msec
00:11:02.680  
00:11:02.680  Disk stats (read/write):
00:11:02.680    nvme0n1: ios=5681/5698, merge=0/0, ticks=311/317, in_queue=628, util=90.48%
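The job above is a one-second, queue-depth-1, 4 KiB sequential write with crc32c verification: roughly 6.5k write IOPS at about 64 us average completion latency, with the verify pass accounting for the read side. The "Could not set queue depth" line is fio failing to adjust the device's queue setting and does not affect the run. The generated job file corresponds to this standalone invocation (mapping fio-wrapper's -p/-i/-d/-t/-r/-v flags onto fio options is an inference):

    fio --name=job0 --filename=/dev/nvme0n1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
        --ioengine=libaio --direct=1 --norandommap=0 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
        --verify_state_save=0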
00:11:02.938   13:37:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:04.837  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
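A single disconnect by NQN detaches every controller for that subsystem, which is why the log reports two controllers going away; waitforserial_disconnect then polls lsblk the same way waitforserial does, but until the serial disappears.

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tears down both the 4420 and 4421 paths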
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:04.837  rmmod nvme_rdma
00:11:04.837  rmmod nvme_fabrics
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
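Teardown unloads the host modules under set +e with a retry loop, since modprobe -r fails while references are still held; a sketch of the idiom (the 20-try bound and module names are from the trace, the pause between attempts is an assumption):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # removing nvme-rdma also drops nvme_fabrics, per the rmmod lines above
        sleep 1                             # assumption: brief pause between attempts
    done
    modprobe -v -r nvme-fabrics             # no-op if the dependency already went away
    set -e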
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3206612 ']'
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3206612
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3206612 ']'
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3206612
00:11:04.837    13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:04.837    13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206612
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206612'
00:11:04.837  killing process with pid 3206612
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3206612
00:11:04.837   13:37:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3206612
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:11:06.739  
00:11:06.739  real	0m17.960s
00:11:06.739  user	0m50.228s
00:11:06.739  sys	0m6.475s
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:06.739  ************************************
00:11:06.739  END TEST nvmf_nmic
00:11:06.739  ************************************
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:06.739  ************************************
00:11:06.739  START TEST nvmf_fio_target
00:11:06.739  ************************************
00:11:06.739   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma
00:11:06.739  * Looking for test storage...
00:11:06.999  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:06.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:06.999  		--rc genhtml_branch_coverage=1
00:11:06.999  		--rc genhtml_function_coverage=1
00:11:06.999  		--rc genhtml_legend=1
00:11:06.999  		--rc geninfo_all_blocks=1
00:11:06.999  		--rc geninfo_unexecuted_blocks=1
00:11:06.999  		
00:11:06.999  		'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:06.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:06.999  		--rc genhtml_branch_coverage=1
00:11:06.999  		--rc genhtml_function_coverage=1
00:11:06.999  		--rc genhtml_legend=1
00:11:06.999  		--rc geninfo_all_blocks=1
00:11:06.999  		--rc geninfo_unexecuted_blocks=1
00:11:06.999  		
00:11:06.999  		'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:06.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:06.999  		--rc genhtml_branch_coverage=1
00:11:06.999  		--rc genhtml_function_coverage=1
00:11:06.999  		--rc genhtml_legend=1
00:11:06.999  		--rc geninfo_all_blocks=1
00:11:06.999  		--rc geninfo_unexecuted_blocks=1
00:11:06.999  		
00:11:06.999  		'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:06.999  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:06.999  		--rc genhtml_branch_coverage=1
00:11:06.999  		--rc genhtml_function_coverage=1
00:11:06.999  		--rc genhtml_legend=1
00:11:06.999  		--rc geninfo_all_blocks=1
00:11:06.999  		--rc geninfo_unexecuted_blocks=1
00:11:06.999  		
00:11:06.999  		'
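The lcov probe above reports version 1.15, and cmp_versions decides 1.15 < 2, so the pre-2.0 option spellings (lcov_branch_coverage, lcov_function_coverage) are exported. cmp_versions splits both versions on '.', '-' and ':' and compares numerically field by field; a compact reconstruction under those assumptions, not a copy of scripts/common.sh:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # versions equal: true only for <=, >=, ==
    }
    lt 1.15 2 && echo "1.15 is older than 2"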
00:11:06.999   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:06.999     13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:06.999      13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.999      13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.999      13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.999      13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:11:06.999      13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
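paths/export.sh prepends the Go, protoc, and golangci directories every time it is sourced, and since each test re-sources common.sh the PATH printed above has accumulated several rounds of duplicates. That is harmless, but it explains the length; a dedup one-liner (purely illustrative, not in the script):

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++') && PATH=${PATH%:}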
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:06.999  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
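The "integer expression expected" message is build_nvmf_app_args reaching a numeric test with an empty variable; the failing command is the test itself, and the script carries on because it sits inside an if. A defensive spelling that avoids the noise (the flag and argument below are placeholders, not SPDK's names):

    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # placeholder flag name
        NVMF_APP+=("$some_extra_arg")           # placeholder argument
    fi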
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:06.999    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:07.000    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:07.000    13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:11:07.000   13:37:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:11:13.565  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:11:13.565   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:11:13.566  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:11:13.566  Found net devices under 0000:d9:00.0: mlx_0_0
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:11:13.566  Found net devices under 0000:d9:00.1: mlx_0_1
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
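              (The loop traced above at common.sh@410-429 maps each PCI function to its netdev name purely through sysfs. A minimal standalone sketch of that mapping, with the device address taken from this log and the glob plus prefix-strip spelled out:

                  pci=0000:d9:00.0
                  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to .../net/mlx_0_0
                  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface basename
                  echo "Found net devices under $pci: ${pci_net_devs[*]}"

              With two functions found, is_hw=yes below commits the test to real hardware rather than soft-RoCE.)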
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:11:13.566    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:11:13.566   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
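              (The modprobe sequence at common.sh@66-72 can be reproduced as one small helper; a minimal sketch, assuming a Linux host with the in-tree InfiniBand/RDMA modules available and root privileges for modprobe:

                  load_ib_rdma_modules() {
                      [[ $(uname) != Linux ]] && return 0
                      local mod
                      # core IB verbs/CM stack first, then the RDMA CM and its userspace hooks
                      for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
                          modprobe "$mod"   # no-op if already loaded
                      done
                  }

              )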
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:13.826  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:13.826      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:13.826      altname enp217s0f0np0
00:11:13.826      altname ens818f0np0
00:11:13.826      inet 192.168.100.8/24 scope global mlx_0_0
00:11:13.826         valid_lft forever preferred_lft forever
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:13.826  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:13.826      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:13.826      altname enp217s0f1np1
00:11:13.826      altname ens818f1np1
00:11:13.826      inet 192.168.100.9/24 scope global mlx_0_1
00:11:13.826         valid_lft forever preferred_lft forever
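              (The get_ip_address helper traced at common.sh@116-117 reduces to a three-stage pipeline over `ip -o`; a minimal sketch, with the interface names and addresses as seen on this test bed:

                  get_ip_address() {
                      local interface=$1
                      # -o prints one line per address; field 4 is "ADDR/PREFIX"
                      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
                  }
                  get_ip_address mlx_0_0   # prints 192.168.100.8 here
                  get_ip_address mlx_0_1   # prints 192.168.100.9 here

              Note both ports report state DOWN with NO-CARRIER in the `ip addr` dumps above; for RDMA traffic between the two local ports this is sufficient for the test to proceed.)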
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:13.826   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:13.826      13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:13.826      13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:13.826     13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:13.826    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:11:13.827  192.168.100.9'
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:11:13.827  192.168.100.9'
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:11:13.827  192.168.100.9'
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2
00:11:13.827    13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
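              (The head/tail pipeline at common.sh@485-486 simply peels the first two addresses off RDMA_IP_LIST; an equivalent standalone sketch using the values from this run:

                  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
                  NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                 # 192.168.100.8
                  NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)   # 192.168.100.9

              The final `modprobe nvme-rdma` loads the host-side NVMe-oF RDMA initiator, needed for the `nvme connect -t rdma` call later in the test.)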
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3211856
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3211856
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3211856 ']'
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:13.827  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:13.827   13:37:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:14.086  [2024-12-14 13:37:13.624190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:11:14.086  [2024-12-14 13:37:13.624284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:14.086  [2024-12-14 13:37:13.758097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:14.344  [2024-12-14 13:37:13.860157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:14.344  [2024-12-14 13:37:13.860205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:14.344  [2024-12-14 13:37:13.860223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:14.344  [2024-12-14 13:37:13.860241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:14.344  [2024-12-14 13:37:13.860253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:14.344  [2024-12-14 13:37:13.862885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:14.344  [2024-12-14 13:37:13.862963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:14.344  [2024-12-14 13:37:13.863003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:14.345  [2024-12-14 13:37:13.863009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
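              (waitforlisten, traced only in fragments above, polls the new nvmf_tgt pid until its RPC socket answers. A condensed sketch; the use of rpc_get_methods as the liveness probe and the 0.5s poll interval are assumptions, the real helper lives in common/autotest_common.sh:

                  waitforlisten() {
                      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
                      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
                      while (( i++ < max_retries )); do
                          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
                          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
                          sleep 0.5
                      done
                      return 1
                  }

              )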
00:11:14.911   13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:15.169  [2024-12-14 13:37:14.679176] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f91d53bd940) succeed.
00:11:15.169  [2024-12-14 13:37:14.688616] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f91d5379940) succeed.
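              (The single RPC at target/fio.sh@19 creates the RDMA transport with the options assembled earlier; spelled out as a standalone command, repository path shortened. Reading -u 8192 as the transport I/O unit size is an assumption here:

                  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

              The two "Create IB device ... succeed" notices above confirm the target claimed both mlx5 ports.)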
00:11:15.428    13:37:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:15.686   13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:11:15.686    13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:15.944   13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:11:15.944    13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:16.202   13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:11:16.202    13:37:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:16.460   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:11:16.460   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:11:16.718    13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:16.976   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:11:16.976    13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:17.235   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:11:17.235    13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:17.493   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:11:17.493   13:37:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:11:17.493   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:17.751   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:17.751   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:18.009   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:18.009   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:18.267   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:18.267  [2024-12-14 13:37:17.962344] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:11:18.267   13:37:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:11:18.525   13:37:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
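              (Condensed, the subsystem bring-up traced above is six RPC calls, in log order, with rpc.py paths shortened; the four namespaces surface on the host as nvme0n1 through nvme0n4 after the connect below:

                  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
                  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
                  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
                  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
                  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
                  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

              )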
00:11:18.783   13:37:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:11:19.717   13:37:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:11:22.245   13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:22.245    13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:22.245    13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:22.245   13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:11:22.245   13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:22.245   13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
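              (waitforserial, traced above, polls lsblk until the expected number of block devices carrying the subsystem serial appear. A minimal sketch reconstructed from the traced line numbers; note grep -c exits non-zero on zero matches, hence the guard:

                  waitforserial() {
                      local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
                      while (( i++ <= 15 )); do
                          sleep 2
                          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial") || true
                          (( nvme_devices == nvme_device_counter )) && return 0
                      done
                      return 1
                  }
                  waitforserial SPDKISFASTANDAWESOME 4   # the four namespaces added above

              Here all four devices showed up on the first check after the 2-second sleep.)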
00:11:22.245   13:37:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:22.245  [global]
00:11:22.245  thread=1
00:11:22.245  invalidate=1
00:11:22.245  rw=write
00:11:22.245  time_based=1
00:11:22.245  runtime=1
00:11:22.245  ioengine=libaio
00:11:22.245  direct=1
00:11:22.245  bs=4096
00:11:22.245  iodepth=1
00:11:22.245  norandommap=0
00:11:22.245  numjobs=1
00:11:22.245  
00:11:22.245  verify_dump=1
00:11:22.245  verify_backlog=512
00:11:22.245  verify_state_save=0
00:11:22.245  do_verify=1
00:11:22.245  verify=crc32c-intel
00:11:22.245  [job0]
00:11:22.245  filename=/dev/nvme0n1
00:11:22.245  [job1]
00:11:22.245  filename=/dev/nvme0n2
00:11:22.245  [job2]
00:11:22.245  filename=/dev/nvme0n3
00:11:22.245  [job3]
00:11:22.245  filename=/dev/nvme0n4
00:11:22.245  Could not set queue depth (nvme0n1)
00:11:22.245  Could not set queue depth (nvme0n2)
00:11:22.245  Could not set queue depth (nvme0n3)
00:11:22.245  Could not set queue depth (nvme0n4)
00:11:22.245  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:22.245  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:22.245  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:22.245  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:22.245  fio-3.35
00:11:22.245  Starting 4 threads
00:11:23.653  
00:11:23.653  job0: (groupid=0, jobs=1): err= 0: pid=3213631: Sat Dec 14 13:37:23 2024
00:11:23.653    read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:11:23.653      slat (nsec): min=8287, max=34981, avg=9183.98, stdev=1158.61
00:11:23.653      clat (usec): min=76, max=187, avg=124.49, stdev=10.62
00:11:23.653       lat (usec): min=85, max=195, avg=133.67, stdev=10.58
00:11:23.653      clat percentiles (usec):
00:11:23.653       |  1.00th=[   96],  5.00th=[  109], 10.00th=[  113], 20.00th=[  118],
00:11:23.653       | 30.00th=[  121], 40.00th=[  123], 50.00th=[  125], 60.00th=[  127],
00:11:23.653       | 70.00th=[  129], 80.00th=[  133], 90.00th=[  137], 95.00th=[  141],
00:11:23.653       | 99.00th=[  161], 99.50th=[  169], 99.90th=[  176], 99.95th=[  182],
00:11:23.653       | 99.99th=[  188]
00:11:23.653    write: IOPS=3959, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets
00:11:23.654      slat (nsec): min=10406, max=69137, avg=11410.76, stdev=1336.79
00:11:23.654      clat (usec): min=68, max=350, avg=115.24, stdev=11.50
00:11:23.654       lat (usec): min=79, max=361, avg=126.65, stdev=11.58
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   87],  5.00th=[   99], 10.00th=[  103], 20.00th=[  108],
00:11:23.654       | 30.00th=[  112], 40.00th=[  114], 50.00th=[  116], 60.00th=[  118],
00:11:23.654       | 70.00th=[  120], 80.00th=[  123], 90.00th=[  127], 95.00th=[  131],
00:11:23.654       | 99.00th=[  153], 99.50th=[  159], 99.90th=[  169], 99.95th=[  178],
00:11:23.654       | 99.99th=[  351]
00:11:23.654     bw (  KiB/s): min=16384, max=16384, per=24.10%, avg=16384.00, stdev= 0.00, samples=1
00:11:23.654     iops        : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:11:23.654    lat (usec)   : 100=4.03%, 250=95.96%, 500=0.01%
00:11:23.654    cpu          : usr=4.80%, sys=11.30%, ctx=7548, majf=0, minf=1
00:11:23.654    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:23.654       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       issued rwts: total=3584,3963,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:23.654       latency   : target=0, window=0, percentile=100.00%, depth=1
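              (One internal consistency check on the job0 numbers above: fio's total latency is submission plus completion latency, lat = slat + clat, and indeed 9.18 us + 124.49 us = 133.67 us, matching the reported lat average exactly.)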
00:11:23.654  job1: (groupid=0, jobs=1): err= 0: pid=3213637: Sat Dec 14 13:37:23 2024
00:11:23.654    read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:11:23.654      slat (nsec): min=8341, max=20113, avg=9228.40, stdev=830.53
00:11:23.654      clat (usec): min=76, max=189, avg=124.54, stdev=10.69
00:11:23.654       lat (usec): min=85, max=197, avg=133.77, stdev=10.64
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   98],  5.00th=[  109], 10.00th=[  113], 20.00th=[  118],
00:11:23.654       | 30.00th=[  121], 40.00th=[  123], 50.00th=[  125], 60.00th=[  127],
00:11:23.654       | 70.00th=[  130], 80.00th=[  133], 90.00th=[  137], 95.00th=[  141],
00:11:23.654       | 99.00th=[  161], 99.50th=[  169], 99.90th=[  180], 99.95th=[  184],
00:11:23.654       | 99.99th=[  190]
00:11:23.654    write: IOPS=3960, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets
00:11:23.654      slat (nsec): min=10380, max=37472, avg=11425.49, stdev=1041.45
00:11:23.654      clat (usec): min=69, max=358, avg=115.20, stdev=11.42
00:11:23.654       lat (usec): min=85, max=369, avg=126.63, stdev=11.46
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   87],  5.00th=[   99], 10.00th=[  102], 20.00th=[  108],
00:11:23.654       | 30.00th=[  111], 40.00th=[  114], 50.00th=[  116], 60.00th=[  118],
00:11:23.654       | 70.00th=[  120], 80.00th=[  123], 90.00th=[  127], 95.00th=[  131],
00:11:23.654       | 99.00th=[  153], 99.50th=[  157], 99.90th=[  172], 99.95th=[  178],
00:11:23.654       | 99.99th=[  359]
00:11:23.654     bw (  KiB/s): min=16384, max=16384, per=24.10%, avg=16384.00, stdev= 0.00, samples=1
00:11:23.654     iops        : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:11:23.654    lat (usec)   : 100=4.13%, 250=95.85%, 500=0.01%
00:11:23.654    cpu          : usr=6.60%, sys=9.50%, ctx=7549, majf=0, minf=1
00:11:23.654    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:23.654       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       issued rwts: total=3584,3964,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:23.654       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:23.654  job2: (groupid=0, jobs=1): err= 0: pid=3213638: Sat Dec 14 13:37:23 2024
00:11:23.654    read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec)
00:11:23.654      slat (nsec): min=8541, max=34897, avg=9265.29, stdev=1020.56
00:11:23.654      clat (usec): min=81, max=265, avg=106.77, stdev=12.33
00:11:23.654       lat (usec): min=94, max=274, avg=116.03, stdev=12.48
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   91],  5.00th=[   94], 10.00th=[   96], 20.00th=[   98],
00:11:23.654       | 30.00th=[  100], 40.00th=[  102], 50.00th=[  104], 60.00th=[  106],
00:11:23.654       | 70.00th=[  109], 80.00th=[  113], 90.00th=[  124], 95.00th=[  135],
00:11:23.654       | 99.00th=[  145], 99.50th=[  151], 99.90th=[  174], 99.95th=[  178],
00:11:23.654       | 99.99th=[  265]
00:11:23.654    write: IOPS=4472, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1001msec); 0 zone resets
00:11:23.654      slat (nsec): min=8617, max=35324, avg=11613.01, stdev=1203.24
00:11:23.654      clat (usec): min=79, max=184, avg=100.72, stdev=12.96
00:11:23.654       lat (usec): min=90, max=196, avg=112.33, stdev=13.08
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   86],  5.00th=[   88], 10.00th=[   90], 20.00th=[   92],
00:11:23.654       | 30.00th=[   94], 40.00th=[   96], 50.00th=[   97], 60.00th=[   99],
00:11:23.654       | 70.00th=[  102], 80.00th=[  105], 90.00th=[  117], 95.00th=[  135],
00:11:23.654       | 99.00th=[  143], 99.50th=[  149], 99.90th=[  176], 99.95th=[  184],
00:11:23.654       | 99.99th=[  186]
00:11:23.654     bw (  KiB/s): min=18696, max=18696, per=27.50%, avg=18696.00, stdev= 0.00, samples=1
00:11:23.654     iops        : min= 4674, max= 4674, avg=4674.00, stdev= 0.00, samples=1
00:11:23.654    lat (usec)   : 100=47.89%, 250=52.09%, 500=0.01%
00:11:23.654    cpu          : usr=7.80%, sys=10.60%, ctx=8573, majf=0, minf=2
00:11:23.654    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:23.654       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       issued rwts: total=4096,4477,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:23.654       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:23.654  job3: (groupid=0, jobs=1): err= 0: pid=3213639: Sat Dec 14 13:37:23 2024
00:11:23.654    read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1001msec)
00:11:23.654      slat (nsec): min=8496, max=35301, avg=9085.26, stdev=896.25
00:11:23.654      clat (usec): min=80, max=192, avg=99.62, stdev=12.71
00:11:23.654       lat (usec): min=90, max=201, avg=108.71, stdev=12.78
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   86],  5.00th=[   88], 10.00th=[   90], 20.00th=[   92],
00:11:23.654       | 30.00th=[   94], 40.00th=[   95], 50.00th=[   97], 60.00th=[   98],
00:11:23.654       | 70.00th=[  100], 80.00th=[  103], 90.00th=[  111], 95.00th=[  135],
00:11:23.654       | 99.00th=[  145], 99.50th=[  147], 99.90th=[  180], 99.95th=[  186],
00:11:23.654       | 99.99th=[  192]
00:11:23.654    write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets
00:11:23.654      slat (nsec): min=10597, max=39058, avg=11544.04, stdev=983.46
00:11:23.654      clat (usec): min=78, max=180, avg=95.25, stdev=13.31
00:11:23.654       lat (usec): min=89, max=192, avg=106.79, stdev=13.41
00:11:23.654      clat percentiles (usec):
00:11:23.654       |  1.00th=[   82],  5.00th=[   84], 10.00th=[   86], 20.00th=[   88],
00:11:23.654       | 30.00th=[   89], 40.00th=[   91], 50.00th=[   92], 60.00th=[   94],
00:11:23.654       | 70.00th=[   96], 80.00th=[   98], 90.00th=[  105], 95.00th=[  135],
00:11:23.654       | 99.00th=[  145], 99.50th=[  147], 99.90th=[  176], 99.95th=[  180],
00:11:23.654       | 99.99th=[  182]
00:11:23.654     bw (  KiB/s): min=20480, max=20480, per=30.13%, avg=20480.00, stdev= 0.00, samples=1
00:11:23.654     iops        : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1
00:11:23.654    lat (usec)   : 100=77.52%, 250=22.48%
00:11:23.654    cpu          : usr=6.10%, sys=13.30%, ctx=9064, majf=0, minf=1
00:11:23.654    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:23.654       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:23.654       issued rwts: total=4456,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:23.654       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:23.654  
00:11:23.654  Run status group 0 (all jobs):
00:11:23.654     READ: bw=61.3MiB/s (64.3MB/s), 14.0MiB/s-17.4MiB/s (14.7MB/s-18.2MB/s), io=61.4MiB (64.4MB), run=1001-1001msec
00:11:23.654    WRITE: bw=66.4MiB/s (69.6MB/s), 15.5MiB/s-18.0MiB/s (16.2MB/s-18.9MB/s), io=66.5MiB (69.7MB), run=1001-1001msec
00:11:23.654  
00:11:23.654  Disk stats (read/write):
00:11:23.654    nvme0n1: ios=3121/3221, merge=0/0, ticks=359/333, in_queue=692, util=84.47%
00:11:23.654    nvme0n2: ios=3072/3220, merge=0/0, ticks=364/339, in_queue=703, util=85.41%
00:11:23.654    nvme0n3: ios=3584/3761, merge=0/0, ticks=342/333, in_queue=675, util=88.57%
00:11:23.654    nvme0n4: ios=3704/4096, merge=0/0, ticks=328/343, in_queue=671, util=89.52%
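              (Quick sanity check on this run: with bs=4096 the bandwidth follows directly from IOPS, BW = IOPS * bs; for job0, 3580 * 4096 B/s is about 14.66 MB/s, i.e. 14.0 MiB/s, matching the reported 14.0MiB/s (14.7MB/s). The group READ line is likewise the per-job sum, 14.0 + 14.0 + 16.0 + 17.4 = 61.4 MiB, in line with the reported 61.3MiB/s over the 1001 msec run.)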
00:11:23.654   13:37:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:11:23.654  [global]
00:11:23.654  thread=1
00:11:23.654  invalidate=1
00:11:23.654  rw=randwrite
00:11:23.654  time_based=1
00:11:23.654  runtime=1
00:11:23.654  ioengine=libaio
00:11:23.654  direct=1
00:11:23.654  bs=4096
00:11:23.654  iodepth=1
00:11:23.654  norandommap=0
00:11:23.654  numjobs=1
00:11:23.654  
00:11:23.654  verify_dump=1
00:11:23.654  verify_backlog=512
00:11:23.654  verify_state_save=0
00:11:23.654  do_verify=1
00:11:23.654  verify=crc32c-intel
00:11:23.654  [job0]
00:11:23.654  filename=/dev/nvme0n1
00:11:23.654  [job1]
00:11:23.654  filename=/dev/nvme0n2
00:11:23.654  [job2]
00:11:23.654  filename=/dev/nvme0n3
00:11:23.654  [job3]
00:11:23.654  filename=/dev/nvme0n4
00:11:23.654  Could not set queue depth (nvme0n1)
00:11:23.654  Could not set queue depth (nvme0n2)
00:11:23.654  Could not set queue depth (nvme0n3)
00:11:23.654  Could not set queue depth (nvme0n4)
00:11:23.915  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:23.915  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:23.915  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:23.915  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:23.915  fio-3.35
00:11:23.915  Starting 4 threads
00:11:25.300  
00:11:25.300  job0: (groupid=0, jobs=1): err= 0: pid=3214062: Sat Dec 14 13:37:24 2024
00:11:25.300    read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec)
00:11:25.300      slat (nsec): min=3074, max=22631, avg=5296.90, stdev=1692.14
00:11:25.300      clat (usec): min=71, max=138, avg=89.40, stdev= 6.07
00:11:25.300       lat (usec): min=76, max=143, avg=94.70, stdev= 6.45
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[   78],  5.00th=[   81], 10.00th=[   83], 20.00th=[   85],
00:11:25.300       | 30.00th=[   87], 40.00th=[   88], 50.00th=[   89], 60.00th=[   91],
00:11:25.300       | 70.00th=[   92], 80.00th=[   94], 90.00th=[   98], 95.00th=[  100],
00:11:25.300       | 99.00th=[  106], 99.50th=[  110], 99.90th=[  115], 99.95th=[  118],
00:11:25.300       | 99.99th=[  139]
00:11:25.300    write: IOPS=5496, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1001msec); 0 zone resets
00:11:25.300      slat (nsec): min=3775, max=70487, avg=6424.60, stdev=2969.54
00:11:25.300      clat (usec): min=65, max=249, avg=84.84, stdev= 6.78
00:11:25.300       lat (usec): min=70, max=254, avg=91.26, stdev= 7.78
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[   73],  5.00th=[   76], 10.00th=[   78], 20.00th=[   80],
00:11:25.300       | 30.00th=[   82], 40.00th=[   83], 50.00th=[   85], 60.00th=[   86],
00:11:25.300       | 70.00th=[   88], 80.00th=[   90], 90.00th=[   93], 95.00th=[   96],
00:11:25.300       | 99.00th=[  103], 99.50th=[  106], 99.90th=[  121], 99.95th=[  133],
00:11:25.300       | 99.99th=[  249]
00:11:25.300     bw (  KiB/s): min=21272, max=21272, per=37.79%, avg=21272.00, stdev= 0.00, samples=1
00:11:25.300     iops        : min= 5318, max= 5318, avg=5318.00, stdev= 0.00, samples=1
00:11:25.300    lat (usec)   : 100=96.45%, 250=3.55%
00:11:25.300    cpu          : usr=3.90%, sys=8.80%, ctx=10623, majf=0, minf=1
00:11:25.300    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:25.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       issued rwts: total=5120,5502,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:25.300       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:25.300  job1: (groupid=0, jobs=1): err= 0: pid=3214065: Sat Dec 14 13:37:24 2024
00:11:25.300    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:11:25.300      slat (nsec): min=8452, max=31594, avg=9172.64, stdev=892.61
00:11:25.300      clat (usec): min=87, max=261, avg=177.55, stdev=14.11
00:11:25.300       lat (usec): min=96, max=270, avg=186.72, stdev=14.12
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[  121],  5.00th=[  163], 10.00th=[  167], 20.00th=[  172],
00:11:25.300       | 30.00th=[  174], 40.00th=[  176], 50.00th=[  178], 60.00th=[  180],
00:11:25.300       | 70.00th=[  182], 80.00th=[  186], 90.00th=[  190], 95.00th=[  194],
00:11:25.300       | 99.00th=[  231], 99.50th=[  237], 99.90th=[  249], 99.95th=[  253],
00:11:25.300       | 99.99th=[  262]
00:11:25.300    write: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets
00:11:25.300      slat (nsec): min=10293, max=43534, avg=11393.88, stdev=1267.39
00:11:25.300      clat (usec): min=81, max=251, avg=165.23, stdev=16.68
00:11:25.300       lat (usec): min=93, max=262, avg=176.62, stdev=16.72
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[  103],  5.00th=[  147], 10.00th=[  153], 20.00th=[  159],
00:11:25.300       | 30.00th=[  161], 40.00th=[  163], 50.00th=[  165], 60.00th=[  167],
00:11:25.300       | 70.00th=[  169], 80.00th=[  174], 90.00th=[  178], 95.00th=[  184],
00:11:25.300       | 99.00th=[  221], 99.50th=[  229], 99.90th=[  241], 99.95th=[  243],
00:11:25.300       | 99.99th=[  251]
00:11:25.300     bw (  KiB/s): min=12288, max=12288, per=21.83%, avg=12288.00, stdev= 0.00, samples=1
00:11:25.300     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:25.300    lat (usec)   : 100=0.44%, 250=99.50%, 500=0.06%
00:11:25.300    cpu          : usr=4.80%, sys=7.00%, ctx=5437, majf=0, minf=1
00:11:25.300    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:25.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       issued rwts: total=2560,2877,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:25.300       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:25.300  job2: (groupid=0, jobs=1): err= 0: pid=3214066: Sat Dec 14 13:37:24 2024
00:11:25.300    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:11:25.300      slat (nsec): min=8638, max=25955, avg=9522.46, stdev=937.67
00:11:25.300      clat (usec): min=88, max=250, avg=177.48, stdev=16.05
00:11:25.300       lat (usec): min=97, max=260, avg=187.00, stdev=16.02
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[  110],  5.00th=[  159], 10.00th=[  167], 20.00th=[  172],
00:11:25.300       | 30.00th=[  174], 40.00th=[  176], 50.00th=[  178], 60.00th=[  180],
00:11:25.300       | 70.00th=[  184], 80.00th=[  186], 90.00th=[  190], 95.00th=[  196],
00:11:25.300       | 99.00th=[  233], 99.50th=[  239], 99.90th=[  245], 99.95th=[  245],
00:11:25.300       | 99.99th=[  251]
00:11:25.300    write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets
00:11:25.300      slat (nsec): min=8700, max=39741, avg=11625.76, stdev=1322.25
00:11:25.300      clat (usec): min=85, max=250, avg=164.94, stdev=18.59
00:11:25.300       lat (usec): min=96, max=262, avg=176.57, stdev=18.62
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[   99],  5.00th=[  141], 10.00th=[  151], 20.00th=[  157],
00:11:25.300       | 30.00th=[  161], 40.00th=[  163], 50.00th=[  165], 60.00th=[  167],
00:11:25.300       | 70.00th=[  172], 80.00th=[  174], 90.00th=[  180], 95.00th=[  192],
00:11:25.300       | 99.00th=[  223], 99.50th=[  229], 99.90th=[  249], 99.95th=[  249],
00:11:25.300       | 99.99th=[  251]
00:11:25.300     bw (  KiB/s): min=12288, max=12288, per=21.83%, avg=12288.00, stdev= 0.00, samples=1
00:11:25.300     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:25.300    lat (usec)   : 100=0.83%, 250=99.13%, 500=0.04%
00:11:25.300    cpu          : usr=5.10%, sys=6.80%, ctx=5432, majf=0, minf=1
00:11:25.300    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:25.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.300       issued rwts: total=2560,2872,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:25.300       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:25.300  job3: (groupid=0, jobs=1): err= 0: pid=3214067: Sat Dec 14 13:37:24 2024
00:11:25.300    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:11:25.300      slat (nsec): min=9354, max=25466, avg=10712.93, stdev=1129.06
00:11:25.300      clat (usec): min=93, max=254, avg=177.19, stdev=13.91
00:11:25.300       lat (usec): min=104, max=265, avg=187.90, stdev=13.90
00:11:25.300      clat percentiles (usec):
00:11:25.300       |  1.00th=[  110],  5.00th=[  163], 10.00th=[  165], 20.00th=[  169],
00:11:25.300       | 30.00th=[  174], 40.00th=[  176], 50.00th=[  178], 60.00th=[  180],
00:11:25.300       | 70.00th=[  182], 80.00th=[  184], 90.00th=[  190], 95.00th=[  196],
00:11:25.300       | 99.00th=[  225], 99.50th=[  231], 99.90th=[  245], 99.95th=[  249],
00:11:25.300       | 99.99th=[  255]
00:11:25.300    write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets
00:11:25.300      slat (nsec): min=10968, max=42925, avg=12281.77, stdev=1440.71
00:11:25.301      clat (usec): min=85, max=236, avg=166.08, stdev=15.75
00:11:25.301       lat (usec): min=98, max=254, avg=178.36, stdev=15.83
00:11:25.301      clat percentiles (usec):
00:11:25.301       |  1.00th=[  100],  5.00th=[  151], 10.00th=[  155], 20.00th=[  159],
00:11:25.301       | 30.00th=[  161], 40.00th=[  163], 50.00th=[  165], 60.00th=[  167],
00:11:25.301       | 70.00th=[  172], 80.00th=[  174], 90.00th=[  180], 95.00th=[  192],
00:11:25.301       | 99.00th=[  217], 99.50th=[  227], 99.90th=[  237], 99.95th=[  237],
00:11:25.301       | 99.99th=[  237]
00:11:25.301     bw (  KiB/s): min=12288, max=12288, per=21.83%, avg=12288.00, stdev= 0.00, samples=1
00:11:25.301     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:25.301    lat (usec)   : 100=0.56%, 250=99.43%, 500=0.02%
00:11:25.301    cpu          : usr=4.80%, sys=8.30%, ctx=5396, majf=0, minf=1
00:11:25.301    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:25.301       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.301       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:25.301       issued rwts: total=2560,2836,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:25.301       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:25.301  
00:11:25.301  Run status group 0 (all jobs):
00:11:25.301     READ: bw=49.9MiB/s (52.4MB/s), 9.99MiB/s-20.0MiB/s (10.5MB/s-20.9MB/s), io=50.0MiB (52.4MB), run=1001-1001msec
00:11:25.301    WRITE: bw=55.0MiB/s (57.6MB/s), 11.1MiB/s-21.5MiB/s (11.6MB/s-22.5MB/s), io=55.0MiB (57.7MB), run=1001-1001msec
00:11:25.301  
00:11:25.301  Disk stats (read/write):
00:11:25.301    nvme0n1: ios=4277/4608, merge=0/0, ticks=368/356, in_queue=724, util=84.74%
00:11:25.301    nvme0n2: ios=2048/2493, merge=0/0, ticks=337/378, in_queue=715, util=85.29%
00:11:25.301    nvme0n3: ios=2048/2487, merge=0/0, ticks=344/377, in_queue=721, util=88.45%
00:11:25.301    nvme0n4: ios=2048/2456, merge=0/0, ticks=347/381, in_queue=728, util=89.60%
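              (The wrapper is re-run below with a much deeper queue. Taken together, the sweep in this test is three invocations that vary only the I/O pattern and depth; a condensed sketch with the repository path shortened:

                  for args in "-d 1 -t write" "-d 1 -t randwrite" "-d 128 -t write"; do
                      scripts/fio-wrapper -p nvmf -i 4096 $args -r 1 -v   # unquoted $args: word splitting intended
                  done

              At iodepth=128 the clat figures below move from microseconds to milliseconds, as expected for malloc-backed namespaces under queueing.)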
00:11:25.301   13:37:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:11:25.301  [global]
00:11:25.301  thread=1
00:11:25.301  invalidate=1
00:11:25.301  rw=write
00:11:25.301  time_based=1
00:11:25.301  runtime=1
00:11:25.301  ioengine=libaio
00:11:25.301  direct=1
00:11:25.301  bs=4096
00:11:25.301  iodepth=128
00:11:25.301  norandommap=0
00:11:25.301  numjobs=1
00:11:25.301  
00:11:25.301  verify_dump=1
00:11:25.301  verify_backlog=512
00:11:25.301  verify_state_save=0
00:11:25.301  do_verify=1
00:11:25.301  verify=crc32c-intel
00:11:25.301  [job0]
00:11:25.301  filename=/dev/nvme0n1
00:11:25.301  [job1]
00:11:25.301  filename=/dev/nvme0n2
00:11:25.301  [job2]
00:11:25.301  filename=/dev/nvme0n3
00:11:25.301  [job3]
00:11:25.301  filename=/dev/nvme0n4
00:11:25.301  Could not set queue depth (nvme0n1)
00:11:25.301  Could not set queue depth (nvme0n2)
00:11:25.301  Could not set queue depth (nvme0n3)
00:11:25.301  Could not set queue depth (nvme0n4)
00:11:25.557  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:25.557  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:25.557  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:25.557  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:25.557  fio-3.35
00:11:25.557  Starting 4 threads
00:11:26.927  
00:11:26.927  job0: (groupid=0, jobs=1): err= 0: pid=3214494: Sat Dec 14 13:37:26 2024
00:11:26.927    read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec)
00:11:26.927      slat (usec): min=2, max=1260, avg=105.84, stdev=269.74
00:11:26.927      clat (usec): min=11961, max=17017, avg=13704.09, stdev=718.11
00:11:26.927       lat (usec): min=11981, max=17021, avg=13809.92, stdev=712.72
00:11:26.927      clat percentiles (usec):
00:11:26.927       |  1.00th=[12387],  5.00th=[12649], 10.00th=[12911], 20.00th=[13173],
00:11:26.927       | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698],
00:11:26.927       | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15008],
00:11:26.927       | 99.00th=[15795], 99.50th=[16188], 99.90th=[16450], 99.95th=[16909],
00:11:26.927       | 99.99th=[16909]
00:11:26.927    write: IOPS=4974, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1004msec); 0 zone resets
00:11:26.927      slat (usec): min=2, max=2013, avg=99.06, stdev=252.82
00:11:26.927      clat (usec): min=3299, max=17115, avg=12783.35, stdev=1006.78
00:11:26.927       lat (usec): min=4212, max=17872, avg=12882.40, stdev=1005.26
00:11:26.927      clat percentiles (usec):
00:11:26.927       |  1.00th=[ 9503],  5.00th=[11600], 10.00th=[11994], 20.00th=[12256],
00:11:26.927       | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911],
00:11:26.927       | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091],
00:11:26.927       | 99.00th=[14746], 99.50th=[15270], 99.90th=[17171], 99.95th=[17171],
00:11:26.927       | 99.99th=[17171]
00:11:26.927     bw (  KiB/s): min=18456, max=20480, per=24.18%, avg=19468.00, stdev=1431.18, samples=2
00:11:26.927     iops        : min= 4614, max= 5120, avg=4867.00, stdev=357.80, samples=2
00:11:26.927    lat (msec)   : 4=0.01%, 10=0.61%, 20=99.38%
00:11:26.927    cpu          : usr=2.09%, sys=4.79%, ctx=1329, majf=0, minf=1
00:11:26.928    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:11:26.928       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:26.928       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:26.928       issued rwts: total=4608,4994,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:26.928       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:26.928  job1: (groupid=0, jobs=1): err= 0: pid=3214495: Sat Dec 14 13:37:26 2024
00:11:26.928    read: IOPS=4604, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1005msec)
00:11:26.928      slat (usec): min=2, max=1073, avg=104.89, stdev=265.22
00:11:26.928      clat (usec): min=3567, max=15656, avg=13484.96, stdev=855.42
00:11:26.928       lat (usec): min=4438, max=15901, avg=13589.86, stdev=846.55
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[12125],  5.00th=[12387], 10.00th=[12780], 20.00th=[13042],
00:11:26.928       | 30.00th=[13173], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698],
00:11:26.928       | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14746],
00:11:26.928       | 99.00th=[15008], 99.50th=[15139], 99.90th=[15401], 99.95th=[15533],
00:11:26.928       | 99.99th=[15664]
00:11:26.928    write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets
00:11:26.928      slat (usec): min=2, max=2090, avg=97.02, stdev=249.18
00:11:26.928      clat (usec): min=6303, max=17223, avg=12615.00, stdev=812.24
00:11:26.928       lat (usec): min=6306, max=17227, avg=12712.02, stdev=810.76
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[11207],  5.00th=[11469], 10.00th=[11863], 20.00th=[12125],
00:11:26.928       | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780],
00:11:26.928       | 70.00th=[12911], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829],
00:11:26.928       | 99.00th=[14484], 99.50th=[15008], 99.90th=[16450], 99.95th=[17171],
00:11:26.928       | 99.99th=[17171]
00:11:26.928     bw (  KiB/s): min=19624, max=20480, per=24.91%, avg=20052.00, stdev=605.28, samples=2
00:11:26.928     iops        : min= 4906, max= 5120, avg=5013.00, stdev=151.32, samples=2
00:11:26.928    lat (msec)   : 4=0.01%, 10=0.58%, 20=99.41%
00:11:26.928    cpu          : usr=1.59%, sys=5.38%, ctx=1442, majf=0, minf=1
00:11:26.928    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:11:26.928       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:26.928       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:26.928       issued rwts: total=4628,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:26.928       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:26.928  job2: (groupid=0, jobs=1): err= 0: pid=3214496: Sat Dec 14 13:37:26 2024
00:11:26.928    read: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1005msec)
00:11:26.928      slat (usec): min=2, max=1085, avg=104.19, stdev=264.41
00:11:26.928      clat (usec): min=3565, max=15859, avg=13478.47, stdev=901.76
00:11:26.928       lat (usec): min=4436, max=15883, avg=13582.66, stdev=896.49
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[12125],  5.00th=[12518], 10.00th=[12780], 20.00th=[13042],
00:11:26.928       | 30.00th=[13173], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698],
00:11:26.928       | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14746],
00:11:26.928       | 99.00th=[15008], 99.50th=[15139], 99.90th=[15664], 99.95th=[15664],
00:11:26.928       | 99.99th=[15795]
00:11:26.928    write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets
00:11:26.928      slat (usec): min=2, max=2046, avg=97.48, stdev=248.63
00:11:26.928      clat (usec): min=7180, max=17227, avg=12595.40, stdev=782.21
00:11:26.928       lat (usec): min=7184, max=17231, avg=12692.88, stdev=780.69
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[11207],  5.00th=[11469], 10.00th=[11863], 20.00th=[12125],
00:11:26.928       | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649],
00:11:26.928       | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829],
00:11:26.928       | 99.00th=[14484], 99.50th=[14877], 99.90th=[17171], 99.95th=[17171],
00:11:26.928       | 99.99th=[17171]
00:11:26.928     bw (  KiB/s): min=19712, max=20480, per=24.96%, avg=20096.00, stdev=543.06, samples=2
00:11:26.928     iops        : min= 4928, max= 5120, avg=5024.00, stdev=135.76, samples=2
00:11:26.928    lat (msec)   : 4=0.01%, 10=0.60%, 20=99.39%
00:11:26.928    cpu          : usr=2.59%, sys=4.38%, ctx=1449, majf=0, minf=1
00:11:26.928    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:11:26.928       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:26.928       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:26.928       issued rwts: total=4639,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:26.928       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:26.928  job3: (groupid=0, jobs=1): err= 0: pid=3214497: Sat Dec 14 13:37:26 2024
00:11:26.928    read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec)
00:11:26.928      slat (usec): min=2, max=1285, avg=105.87, stdev=269.50
00:11:26.928      clat (usec): min=12016, max=17081, avg=13729.20, stdev=718.22
00:11:26.928       lat (usec): min=12020, max=17105, avg=13835.06, stdev=715.63
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[12387],  5.00th=[12649], 10.00th=[13042], 20.00th=[13173],
00:11:26.928       | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698],
00:11:26.928       | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15008],
00:11:26.928       | 99.00th=[15795], 99.50th=[16057], 99.90th=[16319], 99.95th=[16909],
00:11:26.928       | 99.99th=[17171]
00:11:26.928    write: IOPS=4974, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1004msec); 0 zone resets
00:11:26.928      slat (usec): min=2, max=2014, avg=98.98, stdev=252.93
00:11:26.928      clat (usec): min=3313, max=17121, avg=12756.07, stdev=998.58
00:11:26.928       lat (usec): min=4224, max=17125, avg=12855.05, stdev=994.59
00:11:26.928      clat percentiles (usec):
00:11:26.928       |  1.00th=[ 9503],  5.00th=[11600], 10.00th=[11994], 20.00th=[12256],
00:11:26.928       | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911],
00:11:26.928       | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960],
00:11:26.928       | 99.00th=[14615], 99.50th=[15008], 99.90th=[17171], 99.95th=[17171],
00:11:26.928       | 99.99th=[17171]
00:11:26.928     bw (  KiB/s): min=18456, max=20480, per=24.18%, avg=19468.00, stdev=1431.18, samples=2
00:11:26.928     iops        : min= 4614, max= 5120, avg=4867.00, stdev=357.80, samples=2
00:11:26.928    lat (msec)   : 4=0.01%, 10=0.57%, 20=99.42%
00:11:26.928    cpu          : usr=1.89%, sys=5.18%, ctx=1342, majf=0, minf=1
00:11:26.928    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:11:26.928       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:26.928       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:26.928       issued rwts: total=4608,4994,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:26.928       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:26.928  
00:11:26.928  Run status group 0 (all jobs):
00:11:26.928     READ: bw=71.8MiB/s (75.3MB/s), 17.9MiB/s-18.0MiB/s (18.8MB/s-18.9MB/s), io=72.2MiB (75.7MB), run=1004-1005msec
00:11:26.928    WRITE: bw=78.6MiB/s (82.4MB/s), 19.4MiB/s-19.9MiB/s (20.4MB/s-20.9MB/s), io=79.0MiB (82.9MB), run=1004-1005msec
00:11:26.928  
00:11:26.928  Disk stats (read/write):
00:11:26.928    nvme0n1: ios=3936/4096, merge=0/0, ticks=17618/17147, in_queue=34765, util=84.87%
00:11:26.928    nvme0n2: ios=4003/4096, merge=0/0, ticks=17807/16886, in_queue=34693, util=85.60%
00:11:26.928    nvme0n3: ios=4011/4096, merge=0/0, ticks=17847/16860, in_queue=34707, util=88.58%
00:11:26.928    nvme0n4: ios=3889/4096, merge=0/0, ticks=17622/17113, in_queue=34735, util=89.53%
00:11:26.928   13:37:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:11:26.928  [global]
00:11:26.928  thread=1
00:11:26.928  invalidate=1
00:11:26.928  rw=randwrite
00:11:26.928  time_based=1
00:11:26.928  runtime=1
00:11:26.928  ioengine=libaio
00:11:26.928  direct=1
00:11:26.928  bs=4096
00:11:26.928  iodepth=128
00:11:26.928  norandommap=0
00:11:26.928  numjobs=1
00:11:26.928  
00:11:26.928  verify_dump=1
00:11:26.928  verify_backlog=512
00:11:26.928  verify_state_save=0
00:11:26.928  do_verify=1
00:11:26.928  verify=crc32c-intel
00:11:26.928  [job0]
00:11:26.928  filename=/dev/nvme0n1
00:11:26.928  [job1]
00:11:26.928  filename=/dev/nvme0n2
00:11:26.928  [job2]
00:11:26.928  filename=/dev/nvme0n3
00:11:26.928  [job3]
00:11:26.928  filename=/dev/nvme0n4
00:11:26.928  Could not set queue depth (nvme0n1)
00:11:26.928  Could not set queue depth (nvme0n2)
00:11:26.928  Could not set queue depth (nvme0n3)
00:11:26.928  Could not set queue depth (nvme0n4)
00:11:26.928  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:26.928  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:26.928  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:26.928  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:26.928  fio-3.35
00:11:26.928  Starting 4 threads
00:11:28.300  
00:11:28.300  job0: (groupid=0, jobs=1): err= 0: pid=3214913: Sat Dec 14 13:37:27 2024
00:11:28.300    read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:11:28.300      slat (usec): min=2, max=2049, avg=107.46, stdev=314.73
00:11:28.300      clat (usec): min=2866, max=19950, avg=13743.06, stdev=2425.23
00:11:28.300       lat (usec): min=2868, max=20473, avg=13850.52, stdev=2421.91
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[ 6128],  5.00th=[11863], 10.00th=[12387], 20.00th=[12649],
00:11:28.300       | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304],
00:11:28.300       | 70.00th=[13435], 80.00th=[13960], 90.00th=[18744], 95.00th=[19530],
00:11:28.300       | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792],
00:11:28.300       | 99.99th=[20055]
00:11:28.300    write: IOPS=4600, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets
00:11:28.300      slat (usec): min=2, max=1879, avg=106.29, stdev=298.20
00:11:28.300      clat (usec): min=1414, max=19317, avg=13702.46, stdev=2283.62
00:11:28.300       lat (usec): min=2841, max=19668, avg=13808.74, stdev=2280.00
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[11076],  5.00th=[11863], 10.00th=[11994], 20.00th=[12125],
00:11:28.300       | 30.00th=[12387], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042],
00:11:28.300       | 70.00th=[13173], 80.00th=[17171], 90.00th=[17957], 95.00th=[17957],
00:11:28.300       | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268],
00:11:28.300       | 99.99th=[19268]
00:11:28.300     bw (  KiB/s): min=16488, max=20376, per=23.14%, avg=18432.00, stdev=2749.23, samples=2
00:11:28.300     iops        : min= 4122, max= 5094, avg=4608.00, stdev=687.31, samples=2
00:11:28.300    lat (msec)   : 2=0.01%, 4=0.27%, 10=0.64%, 20=99.08%
00:11:28.300    cpu          : usr=2.40%, sys=3.29%, ctx=1966, majf=0, minf=1
00:11:28.300    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:11:28.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:28.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:28.300       issued rwts: total=4608,4614,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:28.300       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:28.300  job1: (groupid=0, jobs=1): err= 0: pid=3214914: Sat Dec 14 13:37:27 2024
00:11:28.300    read: IOPS=4590, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:11:28.300      slat (usec): min=2, max=1176, avg=109.65, stdev=267.66
00:11:28.300      clat (usec): min=2357, max=20482, avg=13902.34, stdev=2452.93
00:11:28.300       lat (usec): min=3000, max=20751, avg=14011.99, stdev=2455.42
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[ 6980],  5.00th=[12125], 10.00th=[12387], 20.00th=[12780],
00:11:28.300       | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435],
00:11:28.300       | 70.00th=[13698], 80.00th=[13960], 90.00th=[18744], 95.00th=[19530],
00:11:28.300       | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055],
00:11:28.300       | 99.99th=[20579]
00:11:28.300    write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets
00:11:28.300      slat (usec): min=2, max=1410, avg=104.26, stdev=253.90
00:11:28.300      clat (usec): min=10985, max=19304, avg=13586.31, stdev=2307.28
00:11:28.300       lat (usec): min=11823, max=19315, avg=13690.57, stdev=2309.80
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[11207],  5.00th=[11731], 10.00th=[11863], 20.00th=[12125],
00:11:28.300       | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780],
00:11:28.300       | 70.00th=[13042], 80.00th=[17171], 90.00th=[17957], 95.00th=[17957],
00:11:28.300       | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268],
00:11:28.300       | 99.99th=[19268]
00:11:28.300     bw (  KiB/s): min=16384, max=20480, per=23.14%, avg=18432.00, stdev=2896.31, samples=2
00:11:28.300     iops        : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2
00:11:28.300    lat (msec)   : 4=0.25%, 10=0.60%, 20=99.14%, 50=0.01%
00:11:28.300    cpu          : usr=1.90%, sys=3.89%, ctx=1862, majf=0, minf=1
00:11:28.300    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:11:28.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:28.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:28.300       issued rwts: total=4604,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:28.300       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:28.300  job2: (groupid=0, jobs=1): err= 0: pid=3214915: Sat Dec 14 13:37:27 2024
00:11:28.300    read: IOPS=5904, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1003msec)
00:11:28.300      slat (usec): min=2, max=1371, avg=84.34, stdev=239.81
00:11:28.300      clat (usec): min=2151, max=14889, avg=10881.63, stdev=2827.14
00:11:28.300       lat (usec): min=2992, max=14892, avg=10965.97, stdev=2840.98
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[ 6325],  5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111],
00:11:28.300       | 30.00th=[ 7635], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911],
00:11:28.300       | 70.00th=[12911], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435],
00:11:28.300       | 99.00th=[14222], 99.50th=[14222], 99.90th=[14877], 99.95th=[14877],
00:11:28.300       | 99.99th=[14877]
00:11:28.300    write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets
00:11:28.300      slat (usec): min=2, max=1582, avg=76.67, stdev=220.64
00:11:28.300      clat (usec): min=4712, max=14015, avg=10174.26, stdev=2920.21
00:11:28.300       lat (usec): min=4722, max=14018, avg=10250.93, stdev=2937.86
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[ 5866],  5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6718],
00:11:28.300       | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[11994], 60.00th=[12256],
00:11:28.300       | 70.00th=[12518], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304],
00:11:28.300       | 99.00th=[13698], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960],
00:11:28.300       | 99.99th=[13960]
00:11:28.300     bw (  KiB/s): min=20480, max=28672, per=30.85%, avg=24576.00, stdev=5792.62, samples=2
00:11:28.300     iops        : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2
00:11:28.300    lat (msec)   : 4=0.07%, 10=38.55%, 20=61.39%
00:11:28.300    cpu          : usr=2.59%, sys=4.89%, ctx=1463, majf=0, minf=2
00:11:28.300    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:11:28.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:28.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:28.300       issued rwts: total=5922,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:28.300       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:28.300  job3: (groupid=0, jobs=1): err= 0: pid=3214916: Sat Dec 14 13:37:27 2024
00:11:28.300    read: IOPS=4587, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:11:28.300      slat (usec): min=2, max=1147, avg=109.79, stdev=266.40
00:11:28.300      clat (usec): min=2345, max=20411, avg=13906.04, stdev=2436.10
00:11:28.300       lat (usec): min=3007, max=20443, avg=14015.84, stdev=2438.02
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[ 6980],  5.00th=[12125], 10.00th=[12387], 20.00th=[12780],
00:11:28.300       | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435],
00:11:28.300       | 70.00th=[13698], 80.00th=[13960], 90.00th=[18744], 95.00th=[19530],
00:11:28.300       | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20317],
00:11:28.300       | 99.99th=[20317]
00:11:28.300    write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets
00:11:28.300      slat (usec): min=2, max=1406, avg=104.22, stdev=252.67
00:11:28.300      clat (usec): min=11005, max=19315, avg=13588.99, stdev=2309.98
00:11:28.300       lat (usec): min=11823, max=19675, avg=13693.21, stdev=2313.48
00:11:28.300      clat percentiles (usec):
00:11:28.300       |  1.00th=[11207],  5.00th=[11731], 10.00th=[11863], 20.00th=[12125],
00:11:28.300       | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649],
00:11:28.300       | 70.00th=[13042], 80.00th=[17171], 90.00th=[17957], 95.00th=[18220],
00:11:28.300       | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268],
00:11:28.300       | 99.99th=[19268]
00:11:28.300     bw (  KiB/s): min=16384, max=20480, per=23.14%, avg=18432.00, stdev=2896.31, samples=2
00:11:28.300     iops        : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2
00:11:28.300    lat (msec)   : 4=0.20%, 10=0.60%, 20=99.16%, 50=0.04%
00:11:28.300    cpu          : usr=2.40%, sys=3.39%, ctx=1855, majf=0, minf=1
00:11:28.300    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:11:28.300       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:28.300       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:28.300       issued rwts: total=4601,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:28.301       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:28.301  
00:11:28.301  Run status group 0 (all jobs):
00:11:28.301     READ: bw=76.9MiB/s (80.6MB/s), 17.9MiB/s-23.1MiB/s (18.8MB/s-24.2MB/s), io=77.1MiB (80.8MB), run=1003-1003msec
00:11:28.301    WRITE: bw=77.8MiB/s (81.6MB/s), 17.9MiB/s-23.9MiB/s (18.8MB/s-25.1MB/s), io=78.0MiB (81.8MB), run=1003-1003msec
00:11:28.301  
00:11:28.301  Disk stats (read/write):
00:11:28.301    nvme0n1: ios=3633/3953, merge=0/0, ticks=12768/13620, in_queue=26388, util=84.75%
00:11:28.301    nvme0n2: ios=3584/3950, merge=0/0, ticks=12953/13474, in_queue=26427, util=85.39%
00:11:28.301    nvme0n3: ios=5120/5279, merge=0/0, ticks=13582/12481, in_queue=26063, util=88.37%
00:11:28.301    nvme0n4: ios=3584/3960, merge=0/0, ticks=12968/13571, in_queue=26539, util=89.41%
00:11:28.301   13:37:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:11:28.301   13:37:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3215026
00:11:28.301   13:37:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:11:28.301   13:37:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:11:28.301  [global]
00:11:28.301  thread=1
00:11:28.301  invalidate=1
00:11:28.301  rw=read
00:11:28.301  time_based=1
00:11:28.301  runtime=10
00:11:28.301  ioengine=libaio
00:11:28.301  direct=1
00:11:28.301  bs=4096
00:11:28.301  iodepth=1
00:11:28.301  norandommap=1
00:11:28.301  numjobs=1
00:11:28.301  
00:11:28.301  [job0]
00:11:28.301  filename=/dev/nvme0n1
00:11:28.301  [job1]
00:11:28.301  filename=/dev/nvme0n2
00:11:28.301  [job2]
00:11:28.301  filename=/dev/nvme0n3
00:11:28.301  [job3]
00:11:28.301  filename=/dev/nvme0n4
00:11:28.301  Could not set queue depth (nvme0n1)
00:11:28.301  Could not set queue depth (nvme0n2)
00:11:28.301  Could not set queue depth (nvme0n3)
00:11:28.301  Could not set queue depth (nvme0n4)
00:11:28.866  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:28.866  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:28.866  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:28.866  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:28.866  fio-3.35
00:11:28.866  Starting 4 threads
00:11:31.390   13:37:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:11:31.390  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=64069632, buflen=4096
00:11:31.390  fio: pid=3215348, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:31.390   13:37:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:11:31.672  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68247552, buflen=4096
00:11:31.672  fio: pid=3215347, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:31.672   13:37:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:31.672   13:37:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:11:31.953  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=18567168, buflen=4096
00:11:31.953  fio: pid=3215345, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:32.242   13:37:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:32.242   13:37:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:11:32.242  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31444992, buflen=4096
00:11:32.242  fio: pid=3215346, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:32.500  
00:11:32.500  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3215345: Sat Dec 14 13:37:31 2024
00:11:32.500    read: IOPS=6818, BW=26.6MiB/s (27.9MB/s)(81.7MiB/3068msec)
00:11:32.500      slat (usec): min=6, max=11947, avg=11.14, stdev=150.42
00:11:32.500      clat (usec): min=41, max=21520, avg=132.98, stdev=213.07
00:11:32.501       lat (usec): min=63, max=21529, avg=144.11, stdev=260.86
00:11:32.501      clat percentiles (usec):
00:11:32.501       |  1.00th=[   65],  5.00th=[   80], 10.00th=[   82], 20.00th=[   86],
00:11:32.501       | 30.00th=[   89], 40.00th=[   98], 50.00th=[  137], 60.00th=[  161],
00:11:32.501       | 70.00th=[  165], 80.00th=[  169], 90.00th=[  188], 95.00th=[  194],
00:11:32.501       | 99.00th=[  206], 99.50th=[  215], 99.90th=[  231], 99.95th=[  239],
00:11:32.501       | 99.99th=[  461]
00:11:32.501     bw (  KiB/s): min=22792, max=41248, per=30.20%, avg=27009.60, stdev=8031.81, samples=5
00:11:32.501     iops        : min= 5698, max=10312, avg=6752.40, stdev=2007.95, samples=5
00:11:32.501    lat (usec)   : 50=0.01%, 100=40.99%, 250=58.96%, 500=0.03%
00:11:32.501    lat (msec)   : 50=0.01%
00:11:32.501    cpu          : usr=3.78%, sys=9.13%, ctx=20923, majf=0, minf=1
00:11:32.501    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:32.501       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       issued rwts: total=20918,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:32.501       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:32.501  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3215346: Sat Dec 14 13:37:31 2024
00:11:32.501    read: IOPS=6960, BW=27.2MiB/s (28.5MB/s)(94.0MiB/3457msec)
00:11:32.501      slat (usec): min=3, max=17594, avg=11.84, stdev=189.06
00:11:32.501      clat (usec): min=49, max=439, avg=129.44, stdev=48.97
00:11:32.501       lat (usec): min=52, max=17731, avg=141.28, stdev=195.09
00:11:32.501      clat percentiles (usec):
00:11:32.501       |  1.00th=[   58],  5.00th=[   61], 10.00th=[   64], 20.00th=[   70],
00:11:32.501       | 30.00th=[   85], 40.00th=[  104], 50.00th=[  149], 60.00th=[  163],
00:11:32.501       | 70.00th=[  167], 80.00th=[  174], 90.00th=[  188], 95.00th=[  192],
00:11:32.501       | 99.00th=[  206], 99.50th=[  215], 99.90th=[  233], 99.95th=[  237],
00:11:32.501       | 99.99th=[  338]
00:11:32.501     bw (  KiB/s): min=22568, max=35788, per=28.01%, avg=25050.00, stdev=5268.63, samples=6
00:11:32.501     iops        : min= 5642, max= 8947, avg=6262.50, stdev=1317.16, samples=6
00:11:32.501    lat (usec)   : 50=0.01%, 100=38.88%, 250=61.10%, 500=0.01%
00:11:32.501    cpu          : usr=3.24%, sys=9.90%, ctx=24069, majf=0, minf=2
00:11:32.501    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:32.501       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       issued rwts: total=24062,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:32.501       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:32.501  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3215347: Sat Dec 14 13:37:31 2024
00:11:32.501    read: IOPS=5828, BW=22.8MiB/s (23.9MB/s)(65.1MiB/2859msec)
00:11:32.501      slat (usec): min=8, max=12939, avg=10.94, stdev=135.14
00:11:32.501      clat (usec): min=69, max=442, avg=157.94, stdev=26.22
00:11:32.501       lat (usec): min=77, max=13058, avg=168.87, stdev=137.24
00:11:32.501      clat percentiles (usec):
00:11:32.501       |  1.00th=[   89],  5.00th=[  102], 10.00th=[  120], 20.00th=[  137],
00:11:32.501       | 30.00th=[  157], 40.00th=[  161], 50.00th=[  163], 60.00th=[  167],
00:11:32.501       | 70.00th=[  169], 80.00th=[  176], 90.00th=[  186], 95.00th=[  194],
00:11:32.501       | 99.00th=[  219], 99.50th=[  223], 99.90th=[  235], 99.95th=[  241],
00:11:32.501       | 99.99th=[  383]
00:11:32.501     bw (  KiB/s): min=22552, max=23392, per=25.59%, avg=22886.40, stdev=307.94, samples=5
00:11:32.501     iops        : min= 5638, max= 5848, avg=5721.60, stdev=76.99, samples=5
00:11:32.501    lat (usec)   : 100=4.48%, 250=95.50%, 500=0.02%
00:11:32.501    cpu          : usr=2.80%, sys=8.68%, ctx=16665, majf=0, minf=2
00:11:32.501    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:32.501       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       issued rwts: total=16663,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:32.501       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:32.501  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3215348: Sat Dec 14 13:37:31 2024
00:11:32.501    read: IOPS=5903, BW=23.1MiB/s (24.2MB/s)(61.1MiB/2650msec)
00:11:32.501      slat (nsec): min=8413, max=36848, avg=9516.77, stdev=1127.42
00:11:32.501      clat (usec): min=81, max=460, avg=157.00, stdev=29.46
00:11:32.501       lat (usec): min=93, max=469, avg=166.51, stdev=29.51
00:11:32.501      clat percentiles (usec):
00:11:32.501       |  1.00th=[   90],  5.00th=[   96], 10.00th=[  102], 20.00th=[  137],
00:11:32.501       | 30.00th=[  159], 40.00th=[  161], 50.00th=[  165], 60.00th=[  167],
00:11:32.501       | 70.00th=[  169], 80.00th=[  176], 90.00th=[  186], 95.00th=[  194],
00:11:32.501       | 99.00th=[  221], 99.50th=[  225], 99.90th=[  237], 99.95th=[  251],
00:11:32.501       | 99.99th=[  314]
00:11:32.501     bw (  KiB/s): min=22552, max=27432, per=26.50%, avg=23696.00, stdev=2092.15, samples=5
00:11:32.501     iops        : min= 5638, max= 6858, avg=5924.00, stdev=523.04, samples=5
00:11:32.501    lat (usec)   : 100=8.50%, 250=91.44%, 500=0.05%
00:11:32.501    cpu          : usr=2.83%, sys=8.61%, ctx=15643, majf=0, minf=2
00:11:32.501    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:32.501       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:32.501       issued rwts: total=15643,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:32.501       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:32.501  
00:11:32.501  Run status group 0 (all jobs):
00:11:32.501     READ: bw=87.3MiB/s (91.6MB/s), 22.8MiB/s-27.2MiB/s (23.9MB/s-28.5MB/s), io=302MiB (317MB), run=2650-3457msec
00:11:32.501  
00:11:32.501  Disk stats (read/write):
00:11:32.501    nvme0n1: ios=19099/0, merge=0/0, ticks=2449/0, in_queue=2449, util=94.19%
00:11:32.501    nvme0n2: ios=22855/0, merge=0/0, ticks=2852/0, in_queue=2852, util=94.19%
00:11:32.501    nvme0n3: ios=16662/0, merge=0/0, ticks=2472/0, in_queue=2472, util=95.62%
00:11:32.501    nvme0n4: ios=15386/0, merge=0/0, ticks=2260/0, in_queue=2260, util=96.46%
00:11:32.501   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:32.501   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:11:32.759   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:32.759   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:11:33.326   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:33.326   13:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:11:33.584   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:33.584   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:11:34.150   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:34.150   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:11:34.408   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:11:34.408   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3215026
00:11:34.408   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:11:34.408   13:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:35.341  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:11:35.341  nvmf hotplug test: fio failed as expected
00:11:35.341   13:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:35.603  rmmod nvme_rdma
00:11:35.603  rmmod nvme_fabrics
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3211856 ']'
00:11:35.603   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3211856
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3211856 ']'
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3211856
00:11:35.604    13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:35.604    13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3211856
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3211856'
00:11:35.604  killing process with pid 3211856
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3211856
00:11:35.604   13:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3211856
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:11:37.508  
00:11:37.508  real	0m30.497s
00:11:37.508  user	2m19.963s
00:11:37.508  sys	0m10.406s
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:37.508  ************************************
00:11:37.508  END TEST nvmf_fio_target
00:11:37.508  ************************************
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:37.508  ************************************
00:11:37.508  START TEST nvmf_bdevio
00:11:37.508  ************************************
00:11:37.508   13:37:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:11:37.508  * Looking for test storage...
00:11:37.508  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:37.508     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:37.508    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:37.508  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.508  		--rc genhtml_branch_coverage=1
00:11:37.508  		--rc genhtml_function_coverage=1
00:11:37.509  		--rc genhtml_legend=1
00:11:37.509  		--rc geninfo_all_blocks=1
00:11:37.509  		--rc geninfo_unexecuted_blocks=1
00:11:37.509  		
00:11:37.509  		'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:37.509  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.509  		--rc genhtml_branch_coverage=1
00:11:37.509  		--rc genhtml_function_coverage=1
00:11:37.509  		--rc genhtml_legend=1
00:11:37.509  		--rc geninfo_all_blocks=1
00:11:37.509  		--rc geninfo_unexecuted_blocks=1
00:11:37.509  		
00:11:37.509  		'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:37.509  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.509  		--rc genhtml_branch_coverage=1
00:11:37.509  		--rc genhtml_function_coverage=1
00:11:37.509  		--rc genhtml_legend=1
00:11:37.509  		--rc geninfo_all_blocks=1
00:11:37.509  		--rc geninfo_unexecuted_blocks=1
00:11:37.509  		
00:11:37.509  		'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:37.509  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.509  		--rc genhtml_branch_coverage=1
00:11:37.509  		--rc genhtml_function_coverage=1
00:11:37.509  		--rc genhtml_legend=1
00:11:37.509  		--rc geninfo_all_blocks=1
00:11:37.509  		--rc geninfo_unexecuted_blocks=1
00:11:37.509  		
00:11:37.509  		'
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:37.509     13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:37.509      13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:37.509      13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:37.509      13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:37.509      13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:11:37.509      13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:37.509  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:37.509    13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:11:37.509   13:37:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:11:45.618  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:45.618   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:11:45.619  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:11:45.619  Found net devices under 0000:d9:00.0: mlx_0_0
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:11:45.619  Found net devices under 0000:d9:00.1: mlx_0_1
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm
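The trace above (nvmf/common.sh @62-@72) is load_ib_rdma_modules pulling in the kernel stack that NVMe/RDMA needs: the IB core, connection managers, and user-space verbs access. A minimal sketch of the same sequence, with the one-modprobe-per-line original folded into a loop:

load_ib_rdma_modules() {
    # Skip on non-Linux hosts, as the uname guard in the trace does.
    [[ $(uname) == Linux ]] || return 0
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}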
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
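The nested loops at @105-@109 are get_rdma_if_list: rxe_cfg rxe-net reports the RDMA-capable netdevs, and only those that also appear in the previously discovered net_devs[] array are printed. Reassembled from the trace (the empty-list handling at @100 is assumed to simply bail out):

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    (( ${#rxe_net_devs[@]} == 0 )) && return 0
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2   # next net_dev once a match is printed
            fi
        done
    done
}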
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:45.619  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:45.619      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:45.619      altname enp217s0f0np0
00:11:45.619      altname ens818f0np0
00:11:45.619      inet 192.168.100.8/24 scope global mlx_0_0
00:11:45.619         valid_lft forever preferred_lft forever
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:45.619  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:45.619      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:45.619      altname enp217s0f1np1
00:11:45.619      altname ens818f1np1
00:11:45.619      inet 192.168.100.9/24 scope global mlx_0_1
00:11:45.619         valid_lft forever preferred_lft forever
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
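Each interface's address above comes from get_ip_address (@116-@117), which is exactly the three-stage pipeline in the trace:

get_ip_address() {
    local interface=$1
    # 'ip -o' prints one line per address; field 4 is ADDR/PREFIX, so the
    # prefix length is cut away to leave the bare IPv4 address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

Per the trace, get_ip_address mlx_0_0 yields 192.168.100.8 and get_ip_address mlx_0_1 yields 192.168.100.9.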
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:45.619   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:45.619      13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:45.619      13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:45.619     13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:45.619    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:11:45.620  192.168.100.9'
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:11:45.620  192.168.100.9'
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:11:45.620  192.168.100.9'
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2
00:11:45.620    13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
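RDMA_IP_LIST is a newline-separated list ("192.168.100.8", then "192.168.100.9"), and the two target IPs are peeled off with the head/tail pipelines traced at @485-@486:

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)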
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3220147
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3220147
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3220147 ']'
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:45.620  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:45.620   13:37:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:45.620  [2024-12-14 13:37:44.364927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:11:45.620  [2024-12-14 13:37:44.365024] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:45.620  [2024-12-14 13:37:44.497072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:45.620  [2024-12-14 13:37:44.599107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:45.620  [2024-12-14 13:37:44.599159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:45.620  [2024-12-14 13:37:44.599172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:45.620  [2024-12-14 13:37:44.599185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:45.620  [2024-12-14 13:37:44.599194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:45.620  [2024-12-14 13:37:44.601737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:11:45.620  [2024-12-14 13:37:44.601831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:11:45.620  [2024-12-14 13:37:44.601899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:45.620  [2024-12-14 13:37:44.601925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
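nvmfappstart above launches nvmf_tgt in the background and blocks in waitforlisten until the RPC socket answers. A sketch of that shape; the poll loop is an editorial approximation of waitforlisten (the real helper in autotest_common.sh handles retries and timeouts), and $rootdir stands in for the spdk checkout path:

"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# Poll until the target answers on its UNIX domain RPC socket.
for ((i = 0; i < 100; i++)); do
    if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done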
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.620   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:45.620  [2024-12-14 13:37:45.274609] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f28cbba4940) succeed.
00:11:45.620  [2024-12-14 13:37:45.284682] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f28cbb5f940) succeed.
00:11:45.877   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.877   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:11:45.877   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.877   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:46.134  Malloc0
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:46.134  [2024-12-14 13:37:45.645152] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
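Steps @18-@22 of target/bdevio.sh build the target: create the RDMA transport, back it with a 64 MiB malloc bdev, and expose that as namespace 1 of cnode1 on 192.168.100.8:4420. Spelled out as plain RPC calls (rpc_cmd is assumed to forward to scripts/rpc.py on the default socket):

rpc=$rootdir/scripts/rpc.py
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420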
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.134   13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:11:46.134  {
00:11:46.134    "params": {
00:11:46.134      "name": "Nvme$subsystem",
00:11:46.134      "trtype": "$TEST_TRANSPORT",
00:11:46.134      "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:46.134      "adrfam": "ipv4",
00:11:46.134      "trsvcid": "$NVMF_PORT",
00:11:46.134      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:46.134      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:46.134      "hdgst": ${hdgst:-false},
00:11:46.134      "ddgst": ${ddgst:-false}
00:11:46.134    },
00:11:46.134    "method": "bdev_nvme_attach_controller"
00:11:46.134  }
00:11:46.134  EOF
00:11:46.134  )")
00:11:46.134     13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:11:46.134    13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:11:46.134     13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:11:46.134     13:37:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:46.134    "params": {
00:11:46.134      "name": "Nvme1",
00:11:46.134      "trtype": "rdma",
00:11:46.134      "traddr": "192.168.100.8",
00:11:46.134      "adrfam": "ipv4",
00:11:46.134      "trsvcid": "4420",
00:11:46.134      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:46.134      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:46.134      "hdgst": false,
00:11:46.134      "ddgst": false
00:11:46.134    },
00:11:46.134    "method": "bdev_nvme_attach_controller"
00:11:46.134  }'
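The @560-@586 trace is gen_nvmf_target_json: a heredoc template expanded once per subsystem, accumulated into config[], joined, and pretty-printed through jq. A sketch of the pattern; the final join is a simplification of the traced IFS=,/printf pair:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    (IFS=,; printf '%s\n' "${config[*]}") | jq .
}

bdevio consumes the result through process substitution, which is why step @24 shows --json /dev/fd/62, i.e. roughly: bdevio --json <(gen_nvmf_target_json).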
00:11:46.134  [2024-12-14 13:37:45.733094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:11:46.134  [2024-12-14 13:37:45.733179] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220433 ]
00:11:46.134  [2024-12-14 13:37:45.863078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:46.391  [2024-12-14 13:37:45.970269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:46.391  [2024-12-14 13:37:45.970336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:46.391  [2024-12-14 13:37:45.970340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:46.648  I/O targets:
00:11:46.648    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:11:46.648  
00:11:46.648  
00:11:46.649       CUnit - A unit testing framework for C - Version 2.1-3
00:11:46.649       http://cunit.sourceforge.net/
00:11:46.649  
00:11:46.649  
00:11:46.649  Suite: bdevio tests on: Nvme1n1
00:11:46.906    Test: blockdev write read block ...passed
00:11:46.906    Test: blockdev write zeroes read block ...passed
00:11:46.906    Test: blockdev write zeroes read no split ...passed
00:11:46.906    Test: blockdev write zeroes read split ...passed
00:11:46.906    Test: blockdev write zeroes read split partial ...passed
00:11:46.906    Test: blockdev reset ...[2024-12-14 13:37:46.472170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:11:46.906  [2024-12-14 13:37:46.507978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:11:46.906  [2024-12-14 13:37:46.541779] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:11:46.906  passed
00:11:46.906    Test: blockdev write read 8 blocks ...passed
00:11:46.906    Test: blockdev write read size > 128k ...passed
00:11:46.906    Test: blockdev write read invalid size ...passed
00:11:46.906    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.906    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.906    Test: blockdev write read max offset ...passed
00:11:46.906    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.906    Test: blockdev writev readv 8 blocks ...passed
00:11:46.906    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.906    Test: blockdev writev readv block ...passed
00:11:46.906    Test: blockdev writev readv size > 128k ...passed
00:11:46.906    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.906    Test: blockdev comparev and writev ...[2024-12-14 13:37:46.547233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.547784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.547984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.548005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.548018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:46.906  [2024-12-14 13:37:46.548034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:46.906  passed
00:11:46.906    Test: blockdev nvme passthru rw ...passed
00:11:46.906    Test: blockdev nvme passthru vendor specific ...[2024-12-14 13:37:46.548363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:46.906  [2024-12-14 13:37:46.548385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.548445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:46.906  [2024-12-14 13:37:46.548461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.548516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:46.906  [2024-12-14 13:37:46.548532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:46.906  [2024-12-14 13:37:46.548590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:46.906  [2024-12-14 13:37:46.548606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:46.906  passed
00:11:46.907    Test: blockdev nvme admin passthru ...passed
00:11:46.907    Test: blockdev copy ...passed
00:11:46.907  
00:11:46.907  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:46.907                suites      1      1    n/a      0        0
00:11:46.907                 tests     23     23     23      0        0
00:11:46.907               asserts    152    152    152      0      n/a
00:11:46.907  
00:11:46.907  Elapsed time =    0.397 seconds
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:47.839  rmmod nvme_rdma
00:11:47.839  rmmod nvme_fabrics
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3220147 ']'
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3220147
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3220147 ']'
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3220147
00:11:47.839    13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:11:47.839   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:47.839    13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220147
00:11:48.097   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:11:48.097   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:11:48.097   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220147'
00:11:48.097  killing process with pid 3220147
00:11:48.097   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3220147
00:11:48.097   13:37:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3220147
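The teardown traced at @121-@129 and in killprocess above unloads the host-side modules and stops the target. A sketch of its shape; the retry delay is an assumption (this run unloads on the first attempt, as the rmmod lines show):

set +e
for i in {1..20}; do
    # nvme-rdma can still be busy right after disconnect; retry a few times.
    modprobe -v -r nvme-rdma && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"
wait "$nvmfpid"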
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:11:49.995  
00:11:49.995  real	0m12.489s
00:11:49.995  user	0m23.395s
00:11:49.995  sys	0m6.412s
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:49.995  ************************************
00:11:49.995  END TEST nvmf_bdevio
00:11:49.995  ************************************
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:11:49.995  
00:11:49.995  real	4m41.192s
00:11:49.995  user	12m28.831s
00:11:49.995  sys	1m41.391s
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:49.995  ************************************
00:11:49.995  END TEST nvmf_target_core
00:11:49.995  ************************************
00:11:49.995   13:37:49 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma
00:11:49.995   13:37:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:49.995   13:37:49 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:49.995   13:37:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:11:49.995  ************************************
00:11:49.995  START TEST nvmf_target_extra
00:11:49.995  ************************************
00:11:49.995   13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma
00:11:49.995  * Looking for test storage...
00:11:49.995  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:11:49.995    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:49.995     13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:11:49.995     13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:49.995    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-:
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<'
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:49.996    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0
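The @333-@368 trace is scripts/common.sh's cmp_versions deciding whether the installed lcov predates 2.x (lt 1.15 2): both versions are split on '.', '-' and ':' and compared numerically, component by component, with missing components treated as 0. A simplified reconstruction, assuming purely numeric components, which is all this comparison needs:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]] && return 0
            return 1
        fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]] && return 0
            return 1
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

Here lt 1.15 2 compares 1 against 2 in the first component and returns true, so the lcov 2.x LCOV_OPTS branch below is taken.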
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:50.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.255  		--rc genhtml_branch_coverage=1
00:11:50.255  		--rc genhtml_function_coverage=1
00:11:50.255  		--rc genhtml_legend=1
00:11:50.255  		--rc geninfo_all_blocks=1
00:11:50.255  		--rc geninfo_unexecuted_blocks=1
00:11:50.255  		
00:11:50.255  		'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:50.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.255  		--rc genhtml_branch_coverage=1
00:11:50.255  		--rc genhtml_function_coverage=1
00:11:50.255  		--rc genhtml_legend=1
00:11:50.255  		--rc geninfo_all_blocks=1
00:11:50.255  		--rc geninfo_unexecuted_blocks=1
00:11:50.255  		
00:11:50.255  		'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:50.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.255  		--rc genhtml_branch_coverage=1
00:11:50.255  		--rc genhtml_function_coverage=1
00:11:50.255  		--rc genhtml_legend=1
00:11:50.255  		--rc geninfo_all_blocks=1
00:11:50.255  		--rc geninfo_unexecuted_blocks=1
00:11:50.255  		
00:11:50.255  		'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:50.255  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.255  		--rc genhtml_branch_coverage=1
00:11:50.255  		--rc genhtml_function_coverage=1
00:11:50.255  		--rc genhtml_legend=1
00:11:50.255  		--rc geninfo_all_blocks=1
00:11:50.255  		--rc geninfo_unexecuted_blocks=1
00:11:50.255  		
00:11:50.255  		'
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
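The host identity at @17-@19 is UUID-based: nvme-cli generates the NQN, and the host ID is its UUID tail. The strip shown below matches the two values in the trace, though the exact derivation in common.sh is assumed:

NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # everything after the last ':' is the UUID
# Both then ride along on nvme connect calls via the NVME_HOST array:
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")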
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:50.255      13:37:49 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.255      13:37:49 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.255      13:37:49 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.255      13:37:49 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH
00:11:50.255      13:37:49 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
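Each re-source of paths/export.sh prepends the same three toolchain directories again, so PATH accumulates duplicates across nested test scripts (visible in the ever-longer strings above). An editorial sketch of a dedupe guard, not present in the SPDK script, that would keep the list stable:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;          # already on PATH, skip
        *) PATH=$1:$PATH ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH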
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:50.255  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0
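The "[: : integer expression expected" message above is '[' evaluating a numeric test against an empty operand at nvmf/common.sh line 33, i.e. a test of the form [ "$flag" -eq 1 ] with the variable unset. It is harmless here (the test just fails), but a null-safe form avoids the noise; $flag is a hypothetical stand-in, since the log does not show which variable that line tests:

flag=${flag:-0}              # default empty/unset to 0 before testing
if [ "$flag" -eq 1 ]; then
    echo "flag is enabled"
fi
# Alternatively, [[ $flag -eq 1 ]] works as-is: [[ ]] does arithmetic
# evaluation and treats an empty operand as 0 instead of erroring.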
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@")
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]]
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:50.255  ************************************
00:11:50.255  START TEST nvmf_example
00:11:50.255  ************************************
00:11:50.255   13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma
00:11:50.255  * Looking for test storage...
00:11:50.255  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:50.255    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:50.255     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version
00:11:50.256     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-:
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1
00:11:50.514    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-:
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<'
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:50.515    13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:50.515     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1
00:11:50.515     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1
00:11:50.515     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:50.515     13:37:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:50.515  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.515  		--rc genhtml_branch_coverage=1
00:11:50.515  		--rc genhtml_function_coverage=1
00:11:50.515  		--rc genhtml_legend=1
00:11:50.515  		--rc geninfo_all_blocks=1
00:11:50.515  		--rc geninfo_unexecuted_blocks=1
00:11:50.515  		
00:11:50.515  		'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:50.515  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.515  		--rc genhtml_branch_coverage=1
00:11:50.515  		--rc genhtml_function_coverage=1
00:11:50.515  		--rc genhtml_legend=1
00:11:50.515  		--rc geninfo_all_blocks=1
00:11:50.515  		--rc geninfo_unexecuted_blocks=1
00:11:50.515  		
00:11:50.515  		'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:50.515  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.515  		--rc genhtml_branch_coverage=1
00:11:50.515  		--rc genhtml_function_coverage=1
00:11:50.515  		--rc genhtml_legend=1
00:11:50.515  		--rc geninfo_all_blocks=1
00:11:50.515  		--rc geninfo_unexecuted_blocks=1
00:11:50.515  		
00:11:50.515  		'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:50.515  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:50.515  		--rc genhtml_branch_coverage=1
00:11:50.515  		--rc genhtml_function_coverage=1
00:11:50.515  		--rc genhtml_legend=1
00:11:50.515  		--rc geninfo_all_blocks=1
00:11:50.515  		--rc geninfo_unexecuted_blocks=1
00:11:50.515  		
00:11:50.515  		'
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:50.515     13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:50.515      13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.515      13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.515      13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.515      13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:11:50.515      13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:50.515  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:50.515    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:50.515   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:50.516    13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable
00:11:50.516   13:37:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=()
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:11:58.627  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:11:58.627  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:11:58.627  Found net devices under 0000:d9:00.0: mlx_0_0
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:11:58.627  Found net devices under 0000:d9:00.1: mlx_0_1
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:11:58.627    13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:11:58.627   13:37:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:11:58.627   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips
00:11:58.627   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:58.627     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:58.627     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:11:58.627   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:58.627    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:58.627   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:58.628  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:58.628      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:58.628      altname enp217s0f0np0
00:11:58.628      altname ens818f0np0
00:11:58.628      inet 192.168.100.8/24 scope global mlx_0_0
00:11:58.628         valid_lft forever preferred_lft forever
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:58.628  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:58.628      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:58.628      altname enp217s0f1np1
00:11:58.628      altname ens818f1np1
00:11:58.628      inet 192.168.100.9/24 scope global mlx_0_1
00:11:58.628         valid_lft forever preferred_lft forever
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:58.628      13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:58.628      13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:58.628     13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:11:58.628  192.168.100.9'
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:11:58.628  192.168.100.9'
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:11:58.628  192.168.100.9'
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1
00:11:58.628    13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3224474
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3224474
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3224474 ']'
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:58.628  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:58.628   13:37:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.628   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.886   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.886    13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:58.886    13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.886    13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.886    13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.886   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:58.886   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:58.887   13:37:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:11.084  Initializing NVMe Controllers
00:12:11.084  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:12:11.084  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:11.084  Initialization complete. Launching workers.
00:12:11.084  ========================================================
00:12:11.084                                                                                                                     Latency(us)
00:12:11.084  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:12:11.084  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   22559.48      88.12    2836.53     750.69   12141.19
00:12:11.084  ========================================================
00:12:11.085  Total                                                                          :   22559.48      88.12    2836.53     750.69   12141.19
00:12:11.085  
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:11.085   13:38:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:12:11.085  rmmod nvme_rdma
00:12:11.085  rmmod nvme_fabrics
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3224474 ']'
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3224474
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3224474 ']'
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3224474
00:12:11.085    13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:11.085    13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3224474
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3224474'
00:12:11.085  killing process with pid 3224474
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3224474
00:12:11.085   13:38:10 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3224474
00:12:12.456  nvmf threads initialize successfully
00:12:12.456  bdev subsystem init successfully
00:12:12.456  created a nvmf target service
00:12:12.456  create targets's poll groups done
00:12:12.456  all subsystems of target started
00:12:12.456  nvmf target is running
00:12:12.456  all subsystems of target stopped
00:12:12.456  destroy targets's poll groups done
00:12:12.456  destroyed the nvmf target service
00:12:12.456  bdev subsystem finish successfully
00:12:12.456  nvmf threads destroy successfully
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:12.456  
00:12:12.456  real	0m22.129s
00:12:12.456  user	0m58.633s
00:12:12.456  sys	0m6.206s
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:12.456  ************************************
00:12:12.456  END TEST nvmf_example
00:12:12.456  ************************************
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:12.456   13:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:12.456  ************************************
00:12:12.456  START TEST nvmf_filesystem
00:12:12.456  ************************************
00:12:12.456   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:12:12.456  * Looking for test storage...
00:12:12.456  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:12.456     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:12.456      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:12:12.716     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:12:12.716      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:12:12.717      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:12:12.717      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:12.717      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:12.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.717  		--rc genhtml_branch_coverage=1
00:12:12.717  		--rc genhtml_function_coverage=1
00:12:12.717  		--rc genhtml_legend=1
00:12:12.717  		--rc geninfo_all_blocks=1
00:12:12.717  		--rc geninfo_unexecuted_blocks=1
00:12:12.717  		
00:12:12.717  		'
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:12.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.717  		--rc genhtml_branch_coverage=1
00:12:12.717  		--rc genhtml_function_coverage=1
00:12:12.717  		--rc genhtml_legend=1
00:12:12.717  		--rc geninfo_all_blocks=1
00:12:12.717  		--rc geninfo_unexecuted_blocks=1
00:12:12.717  		
00:12:12.717  		'
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:12.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.717  		--rc genhtml_branch_coverage=1
00:12:12.717  		--rc genhtml_function_coverage=1
00:12:12.717  		--rc genhtml_legend=1
00:12:12.717  		--rc geninfo_all_blocks=1
00:12:12.717  		--rc geninfo_unexecuted_blocks=1
00:12:12.717  		
00:12:12.717  		'
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:12.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.717  		--rc genhtml_branch_coverage=1
00:12:12.717  		--rc genhtml_function_coverage=1
00:12:12.717  		--rc genhtml_legend=1
00:12:12.717  		--rc geninfo_all_blocks=1
00:12:12.717  		--rc geninfo_unexecuted_blocks=1
00:12:12.717  		
00:12:12.717  		'
00:12:12.717   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']'
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]]
00:12:12.717    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:12:12.717     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:12:12.718    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh
00:12:12.718       13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]]
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:12:12.718  #define SPDK_CONFIG_H
00:12:12.718  #define SPDK_CONFIG_AIO_FSDEV 1
00:12:12.718  #define SPDK_CONFIG_APPS 1
00:12:12.718  #define SPDK_CONFIG_ARCH native
00:12:12.718  #define SPDK_CONFIG_ASAN 1
00:12:12.718  #undef SPDK_CONFIG_AVAHI
00:12:12.718  #undef SPDK_CONFIG_CET
00:12:12.718  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:12:12.718  #define SPDK_CONFIG_COVERAGE 1
00:12:12.718  #define SPDK_CONFIG_CROSS_PREFIX 
00:12:12.718  #undef SPDK_CONFIG_CRYPTO
00:12:12.718  #undef SPDK_CONFIG_CRYPTO_MLX5
00:12:12.718  #undef SPDK_CONFIG_CUSTOMOCF
00:12:12.718  #undef SPDK_CONFIG_DAOS
00:12:12.718  #define SPDK_CONFIG_DAOS_DIR 
00:12:12.718  #define SPDK_CONFIG_DEBUG 1
00:12:12.718  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:12:12.718  #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:12:12.718  #define SPDK_CONFIG_DPDK_INC_DIR 
00:12:12.718  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:12:12.718  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:12:12.718  #undef SPDK_CONFIG_DPDK_UADK
00:12:12.718  #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:12:12.718  #define SPDK_CONFIG_EXAMPLES 1
00:12:12.718  #undef SPDK_CONFIG_FC
00:12:12.718  #define SPDK_CONFIG_FC_PATH 
00:12:12.718  #define SPDK_CONFIG_FIO_PLUGIN 1
00:12:12.718  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:12:12.718  #define SPDK_CONFIG_FSDEV 1
00:12:12.718  #undef SPDK_CONFIG_FUSE
00:12:12.718  #undef SPDK_CONFIG_FUZZER
00:12:12.718  #define SPDK_CONFIG_FUZZER_LIB 
00:12:12.718  #undef SPDK_CONFIG_GOLANG
00:12:12.718  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:12:12.718  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:12:12.718  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:12:12.718  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:12:12.718  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:12:12.718  #undef SPDK_CONFIG_HAVE_LIBBSD
00:12:12.718  #undef SPDK_CONFIG_HAVE_LZ4
00:12:12.718  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:12:12.718  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:12:12.718  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:12:12.718  #define SPDK_CONFIG_IDXD 1
00:12:12.718  #define SPDK_CONFIG_IDXD_KERNEL 1
00:12:12.718  #undef SPDK_CONFIG_IPSEC_MB
00:12:12.718  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:12:12.718  #define SPDK_CONFIG_ISAL 1
00:12:12.718  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:12:12.718  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:12:12.718  #define SPDK_CONFIG_LIBDIR 
00:12:12.718  #undef SPDK_CONFIG_LTO
00:12:12.718  #define SPDK_CONFIG_MAX_LCORES 128
00:12:12.718  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:12:12.718  #define SPDK_CONFIG_NVME_CUSE 1
00:12:12.718  #undef SPDK_CONFIG_OCF
00:12:12.718  #define SPDK_CONFIG_OCF_PATH 
00:12:12.718  #define SPDK_CONFIG_OPENSSL_PATH 
00:12:12.718  #undef SPDK_CONFIG_PGO_CAPTURE
00:12:12.718  #define SPDK_CONFIG_PGO_DIR 
00:12:12.718  #undef SPDK_CONFIG_PGO_USE
00:12:12.718  #define SPDK_CONFIG_PREFIX /usr/local
00:12:12.718  #undef SPDK_CONFIG_RAID5F
00:12:12.718  #undef SPDK_CONFIG_RBD
00:12:12.718  #define SPDK_CONFIG_RDMA 1
00:12:12.718  #define SPDK_CONFIG_RDMA_PROV verbs
00:12:12.718  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:12:12.718  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:12:12.718  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:12:12.718  #define SPDK_CONFIG_SHARED 1
00:12:12.718  #undef SPDK_CONFIG_SMA
00:12:12.718  #define SPDK_CONFIG_TESTS 1
00:12:12.718  #undef SPDK_CONFIG_TSAN
00:12:12.718  #define SPDK_CONFIG_UBLK 1
00:12:12.718  #define SPDK_CONFIG_UBSAN 1
00:12:12.718  #undef SPDK_CONFIG_UNIT_TESTS
00:12:12.718  #undef SPDK_CONFIG_URING
00:12:12.718  #define SPDK_CONFIG_URING_PATH 
00:12:12.718  #undef SPDK_CONFIG_URING_ZNS
00:12:12.718  #undef SPDK_CONFIG_USDT
00:12:12.718  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:12:12.718  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:12:12.718  #undef SPDK_CONFIG_VFIO_USER
00:12:12.718  #define SPDK_CONFIG_VFIO_USER_DIR 
00:12:12.718  #define SPDK_CONFIG_VHOST 1
00:12:12.718  #define SPDK_CONFIG_VIRTIO 1
00:12:12.718  #undef SPDK_CONFIG_VTUNE
00:12:12.718  #define SPDK_CONFIG_VTUNE_DIR 
00:12:12.718  #define SPDK_CONFIG_WERROR 1
00:12:12.718  #define SPDK_CONFIG_WPDK_DIR 
00:12:12.718  #undef SPDK_CONFIG_XNVME
00:12:12.718  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
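[editor note] The two [[ ]] tests at applications.sh@22-23 above read the generated include/spdk/config.h (dumped in full between them by xtrace) and glob-match its contents for "#define SPDK_CONFIG_DEBUG", which gates whether debug variants of the apps may be substituted. A minimal sketch of that style of check, with a hypothetical header path:

  # Sketch only: detect a debug build from a generated config header.
  config=include/spdk/config.h   # hypothetical relative path
  if [[ -e $config && $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "SPDK built with debug enabled"
  fi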
00:12:12.718    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:12.718     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:12:12.718      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
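[editor note] The PATH values printed by paths/export.sh@2-6 above contain the same toolchain directories many times over; that is consistent with export.sh doing unconditional prepends and being sourced repeatedly by nested test scripts, though the script body itself is not shown in this log. A sketch of the assumed prepend pattern:

  # Sketch: unconditional prepends duplicate entries on every re-source.
  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH

Duplicate PATH entries are harmless for lookup (the first hit wins) but make the traced values long, as seen above.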
00:12:12.718    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:12:12.719       13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:12:12.719      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:12:12.719      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
00:12:12.719      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:12:12.719     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]]
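[editor note] pm/common@78-85 above builds the resource-monitor list: cpu-load and vmstat always run, while cpu-temp and bmc-pm are appended only on bare-metal Linux, i.e. not FreeBSD, not a QEMU guest, and not a container. The dotted string compared against QEMU at @81 looks like a DMI product name; assuming it is read from sysfs, the gating reduces to roughly:

  # Sketch of the monitor-selection gating traced above.
  # /sys/class/dmi/id/product_name is an assumption for the QEMU probe.
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]] &&
     [[ $(cat /sys/class/dmi/id/product_name 2>/dev/null) != QEMU ]] &&
     [[ ! -e /.dockerenv ]]; then
      MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi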
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:12.719    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
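[editor note] Each ": <value>" / "export <FLAG>" pair traced at autotest_common.sh@58-178 above is consistent with the shell idiom that assigns a default only when the caller left the flag unset, then exports it; the bare ":" lines (e.g. @70, @126) are flags whose default is the empty string. For this run the notable non-zero defaults are SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=rdma and SPDK_TEST_NVMF_NICS=mlx5. A sketch of the idiom with one flag:

  # ':' is a no-op, but the ${VAR:=default} expansion inside it
  # assigns default only if VAR is unset or empty.
  : "${SPDK_TEST_NVMF:=1}"
  export SPDK_TEST_NVMF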
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:12:12.720    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:12:12.721     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3227199 ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3227199
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:12.721     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.McDeyv
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.McDeyv/tests/target /tmp/spdk.McDeyv
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.721     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:12:12.721     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=422735872
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4861693952
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55532826624
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730603008
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6197776384
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30850506752
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:12.721    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323033088
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23089152
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30864900096
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=401408
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:12.722  * Looking for test storage...
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55532826624
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8412368896
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.722  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
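[editor note] set_test_storage (autotest_common.sh@341-402 above) parses df output into parallel associative arrays keyed by mount point, then walks the storage candidates until one offers at least the requested ~2.2 GB (@371); here the overlay root with ~55 GB available wins immediately. A reduced sketch of the parsing loop, with hypothetical names and without the candidate walk (the units of the numbers depend on the df flags actually passed):

  # Sketch: collect per-mount free space from df output.
  declare -A fss avails
  while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs
      avails["$mount"]=$avail
  done < <(df -T | grep -v Filesystem)

  # Resolve the mount point backing a directory, as done at @385:
  target_mount=$(df "$PWD" | awk '$1 !~ /Filesystem/{print $6}')
  echo "free on $target_mount: ${avails[$target_mount]}"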
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
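[editor note] The @25-@31 block above is the xtrace plumbing: fd 15 is verified to exist, trace output is re-pointed at it via exec, and set -x is re-enabled once the noisy sourcing is done. Bash supports redirecting trace output through BASH_XTRACEFD; a minimal sketch of the mechanism as it appears to be used here (the file name is hypothetical):

  # Route xtrace output to a dedicated fd instead of stderr.
  exec 15> /tmp/xtrace.log   # hypothetical target
  BASH_XTRACEFD=15
  set -x
  echo hello                 # the '+ echo hello' trace lands in the log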
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:12.722     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:12:12.722    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:12.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.980  		--rc genhtml_branch_coverage=1
00:12:12.980  		--rc genhtml_function_coverage=1
00:12:12.980  		--rc genhtml_legend=1
00:12:12.980  		--rc geninfo_all_blocks=1
00:12:12.980  		--rc geninfo_unexecuted_blocks=1
00:12:12.980  		
00:12:12.980  		'
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:12.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.980  		--rc genhtml_branch_coverage=1
00:12:12.980  		--rc genhtml_function_coverage=1
00:12:12.980  		--rc genhtml_legend=1
00:12:12.980  		--rc geninfo_all_blocks=1
00:12:12.980  		--rc geninfo_unexecuted_blocks=1
00:12:12.980  		
00:12:12.980  		'
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:12.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.980  		--rc genhtml_branch_coverage=1
00:12:12.980  		--rc genhtml_function_coverage=1
00:12:12.980  		--rc genhtml_legend=1
00:12:12.980  		--rc geninfo_all_blocks=1
00:12:12.980  		--rc geninfo_unexecuted_blocks=1
00:12:12.980  		
00:12:12.980  		'
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:12.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.980  		--rc genhtml_branch_coverage=1
00:12:12.980  		--rc genhtml_function_coverage=1
00:12:12.980  		--rc genhtml_legend=1
00:12:12.980  		--rc geninfo_all_blocks=1
00:12:12.980  		--rc geninfo_unexecuted_blocks=1
00:12:12.980  		
00:12:12.980  		'
00:12:12.980   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:12.980     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:12.980    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:12:12.981     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:12:12.981     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:12.981     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:12.981     13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:12.981      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.981      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.981      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.981      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:12:12.981      13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:12.981  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:12.981    13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:12:12.981   13:38:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:12:19.540  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:12:19.540  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:12:19.540  Found net devices under 0000:d9:00.0: mlx_0_0
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:12:19.540  Found net devices under 0000:d9:00.1: mlx_0_1
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:12:19.540    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad
00:12:19.540   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:12:19.541   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm
00:12:19.541   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:12:19.541   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:12:19.541   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips
00:12:19.541   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:12:19.541    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list
00:12:19.541    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:19.541    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:19.541     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:19.541     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:12:19.799  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:19.799      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:12:19.799      altname enp217s0f0np0
00:12:19.799      altname ens818f0np0
00:12:19.799      inet 192.168.100.8/24 scope global mlx_0_0
00:12:19.799         valid_lft forever preferred_lft forever
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:19.799    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:12:19.799  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:19.799      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:12:19.799      altname enp217s0f1np1
00:12:19.799      altname ens818f1np1
00:12:19.799      inet 192.168.100.9/24 scope global mlx_0_1
00:12:19.799         valid_lft forever preferred_lft forever
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:12:19.799   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:19.800      13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:19.800      13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:12:19.800     13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:12:19.800  192.168.100.9'
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:12:19.800  192.168.100.9'
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:12:19.800  192.168.100.9'
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2
00:12:19.800    13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:19.800  ************************************
00:12:19.800  START TEST nvmf_filesystem_no_in_capsule
00:12:19.800  ************************************
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3230474
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3230474
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3230474 ']'
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:19.800  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:19.800   13:38:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:20.058  [2024-12-14 13:38:19.588040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:12:20.058  [2024-12-14 13:38:19.588147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:20.058  [2024-12-14 13:38:19.729178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:20.316  [2024-12-14 13:38:19.838959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:20.316  [2024-12-14 13:38:19.839001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:20.316  [2024-12-14 13:38:19.839013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:20.316  [2024-12-14 13:38:19.839025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:20.316  [2024-12-14 13:38:19.839034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:20.316  [2024-12-14 13:38:19.841362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:20.316  [2024-12-14 13:38:19.841380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:20.316  [2024-12-14 13:38:19.841474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:20.316  [2024-12-14 13:38:19.841487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.881   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:20.881  [2024-12-14 13:38:20.440719] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:12:20.881  [2024-12-14 13:38:20.480408] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7efd27b1d940) succeed.
00:12:20.881  [2024-12-14 13:38:20.489888] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7efd279bd940) succeed.
00:12:21.139   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.139   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:12:21.139   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.139   13:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:21.397  Malloc1
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.397   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:21.654  [2024-12-14 13:38:21.139741] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:12:21.654  {
00:12:21.654  "name": "Malloc1",
00:12:21.654  "aliases": [
00:12:21.654  "d7cf5515-2915-450b-9112-c56786761610"
00:12:21.654  ],
00:12:21.654  "product_name": "Malloc disk",
00:12:21.654  "block_size": 512,
00:12:21.654  "num_blocks": 1048576,
00:12:21.654  "uuid": "d7cf5515-2915-450b-9112-c56786761610",
00:12:21.654  "assigned_rate_limits": {
00:12:21.654  "rw_ios_per_sec": 0,
00:12:21.654  "rw_mbytes_per_sec": 0,
00:12:21.654  "r_mbytes_per_sec": 0,
00:12:21.654  "w_mbytes_per_sec": 0
00:12:21.654  },
00:12:21.654  "claimed": true,
00:12:21.654  "claim_type": "exclusive_write",
00:12:21.654  "zoned": false,
00:12:21.654  "supported_io_types": {
00:12:21.654  "read": true,
00:12:21.654  "write": true,
00:12:21.654  "unmap": true,
00:12:21.654  "flush": true,
00:12:21.654  "reset": true,
00:12:21.654  "nvme_admin": false,
00:12:21.654  "nvme_io": false,
00:12:21.654  "nvme_io_md": false,
00:12:21.654  "write_zeroes": true,
00:12:21.654  "zcopy": true,
00:12:21.654  "get_zone_info": false,
00:12:21.654  "zone_management": false,
00:12:21.654  "zone_append": false,
00:12:21.654  "compare": false,
00:12:21.654  "compare_and_write": false,
00:12:21.654  "abort": true,
00:12:21.654  "seek_hole": false,
00:12:21.654  "seek_data": false,
00:12:21.654  "copy": true,
00:12:21.654  "nvme_iov_md": false
00:12:21.654  },
00:12:21.654  "memory_domains": [
00:12:21.654  {
00:12:21.654  "dma_device_id": "system",
00:12:21.654  "dma_device_type": 1
00:12:21.654  },
00:12:21.654  {
00:12:21.654  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:21.654  "dma_device_type": 2
00:12:21.654  }
00:12:21.654  ],
00:12:21.654  "driver_specific": {}
00:12:21.654  }
00:12:21.654  ]'
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:12:21.654     13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:12:21.654    13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:12:21.654   13:38:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:12:22.587   13:38:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:12:22.587   13:38:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:12:22.587   13:38:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:22.587   13:38:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:22.587   13:38:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:12:24.524   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:24.524    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:24.524    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:12:24.782    13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:12:24.782   13:38:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:26.153  ************************************
00:12:26.153  START TEST filesystem_ext4
00:12:26.153  ************************************
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:12:26.153  mke2fs 1.47.0 (5-Feb-2023)
00:12:26.153  Discarding device blocks: done
00:12:26.153  Creating filesystem with 522240 1k blocks and 130560 inodes
00:12:26.153  Filesystem UUID: 7b0d0497-7f82-4755-a895-c25afcf62381
00:12:26.153  Superblock backups stored on blocks: 
00:12:26.153  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:12:26.153  
00:12:26.153  Allocating group tables: done
00:12:26.153  Writing inode tables: done
00:12:26.153  Creating journal (8192 blocks): done
00:12:26.153  Writing superblocks and filesystem accounting information: done
00:12:26.153  
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3230474
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:26.153  
00:12:26.153  real	0m0.206s
00:12:26.153  user	0m0.034s
00:12:26.153  sys	0m0.071s
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:12:26.153  ************************************
00:12:26.153  END TEST filesystem_ext4
00:12:26.153  ************************************
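The xtrace above (autotest_common.sh@930-949) shows the shape of the make_filesystem helper: pick the filesystem-specific force flag, then run the matching mkfs. A minimal sketch reconstructed from the trace; the retry machinery implied by `local i=0` is omitted here:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    # ext4 spells its force flag -F; btrfs and xfs use -f (see @935-@938 above)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}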
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:12:26.153   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:26.154  ************************************
00:12:26.154  START TEST filesystem_btrfs
00:12:26.154  ************************************
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:12:26.154   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:26.412  btrfs-progs v6.8.1
00:12:26.412  See https://btrfs.readthedocs.io for more information.
00:12:26.412  
00:12:26.412  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:26.412  NOTE: several default settings have changed in version 5.15, please make sure
00:12:26.412        this does not affect your deployments:
00:12:26.412        - DUP for metadata (-m dup)
00:12:26.412        - enabled no-holes (-O no-holes)
00:12:26.412        - enabled free-space-tree (-R free-space-tree)
00:12:26.412  
00:12:26.412  Label:              (null)
00:12:26.412  UUID:               0a67f4e8-603d-4449-abf6-01caa59665ae
00:12:26.412  Node size:          16384
00:12:26.412  Sector size:        4096	(CPU page size: 4096)
00:12:26.412  Filesystem size:    510.00MiB
00:12:26.412  Block group profiles:
00:12:26.412    Data:             single            8.00MiB
00:12:26.412    Metadata:         DUP              32.00MiB
00:12:26.412    System:           DUP               8.00MiB
00:12:26.412  SSD detected:       yes
00:12:26.412  Zoned device:       no
00:12:26.412  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:12:26.412  Checksum:           crc32c
00:12:26.412  Number of devices:  1
00:12:26.412  Devices:
00:12:26.412     ID        SIZE  PATH          
00:12:26.412      1   510.00MiB  /dev/nvme0n1p1
00:12:26.412  
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:26.412   13:38:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3230474
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:26.412  
00:12:26.412  real	0m0.258s
00:12:26.412  user	0m0.032s
00:12:26.412  sys	0m0.127s
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:26.412  ************************************
00:12:26.412  END TEST filesystem_btrfs
00:12:26.412  ************************************
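Each filesystem variant then runs the same smoke test (target/filesystem.sh@23-30): mount the fresh partition, prove it accepts a write, and unmount, all while the target keeps serving the namespace over RDMA. The commands, taken verbatim from the trace (the `i=0` counter hints at a retry loop around umount that the trace does not expand):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa      # a write that really travels over NVMe-oF RDMA
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"         # assert nvmf_tgt survived the I/O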
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:26.412  ************************************
00:12:26.412  START TEST filesystem_xfs
00:12:26.412  ************************************
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:26.412   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:26.413   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:12:26.413   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:26.413   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:26.413   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:26.671  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:12:26.671           =                       sectsz=512   attr=2, projid32bit=1
00:12:26.671           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:12:26.671           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:12:26.671  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:12:26.671           =                       sunit=0      swidth=0 blks
00:12:26.671  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:12:26.671  log      =internal log           bsize=4096   blocks=16384, version=2
00:12:26.671           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:12:26.671  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:12:26.671  Discarding blocks...Done.
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3230474
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:26.671  
00:12:26.671  real	0m0.217s
00:12:26.671  user	0m0.031s
00:12:26.671  sys	0m0.078s
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:26.671  ************************************
00:12:26.671  END TEST filesystem_xfs
00:12:26.671  ************************************
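After each unmount, the script asserts that both the namespace and its partition are still visible to the host (target/filesystem.sh@40-43), i.e. the RDMA connection did not drop mid-test:

lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact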
00:12:26.671   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:26.929   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:26.929   13:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:27.861  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
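waitforserial_disconnect polls lsblk until the serial vanishes from the host. A hedged reconstruction of autotest_common.sh@1223-1235 as traced above; the retry budget shown is an assumption:

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1   # assumed give-up limit
        sleep 1
    done
    return 0
}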
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3230474
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3230474 ']'
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3230474
00:12:27.861    13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:27.861    13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230474
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230474'
00:12:27.861  killing process with pid 3230474
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3230474
00:12:27.861   13:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3230474
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:12:31.140  
00:12:31.140  real	0m10.757s
00:12:31.140  user	0m40.350s
00:12:31.140  sys	0m1.465s
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:31.140  ************************************
00:12:31.140  END TEST nvmf_filesystem_no_in_capsule
00:12:31.140  ************************************
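The START/END banners and the real/user/sys blocks throughout this log come from the run_test wrapper, which is essentially a named `time` around the test function. A sketch only; SPDK's real wrapper also toggles xtrace state:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}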
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:31.140  ************************************
00:12:31.140  START TEST nvmf_filesystem_in_capsule
00:12:31.140  ************************************
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3232504
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3232504
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3232504 ']'
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:31.140  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:31.140   13:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:31.140  [2024-12-14 13:38:30.428687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:12:31.140  [2024-12-14 13:38:30.428781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:31.140  [2024-12-14 13:38:30.565018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:31.140  [2024-12-14 13:38:30.672946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:31.140  [2024-12-14 13:38:30.672995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:31.140  [2024-12-14 13:38:30.673008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:31.140  [2024-12-14 13:38:30.673038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:31.140  [2024-12-14 13:38:30.673048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:31.140  [2024-12-14 13:38:30.675620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:31.140  [2024-12-14 13:38:30.675688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:31.140  [2024-12-14 13:38:30.675766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.140  [2024-12-14 13:38:30.675778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
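nvmfappstart, whose DPDK and reactor start-up notices appear above, boils down to launching nvmf_tgt with the requested core mask and blocking until its RPC socket answers. A hedged sketch; the readiness probe here is an assumption, not SPDK's exact waitforlisten logic:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on four cores
nvmfpid=$!
# poll the default RPC socket until the app is ready
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done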
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:31.705   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:31.705  [2024-12-14 13:38:31.335406] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fa76e9bd940) succeed.
00:12:31.705  [2024-12-14 13:38:31.344887] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fa76e979940) succeed.
00:12:31.962   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
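The only knob that differs from the no_in_capsule half of this suite is the transport's in-capsule data size: with `-c 4096`, write payloads up to 4 KiB travel inside the command capsule instead of being fetched with a separate RDMA READ. The call from the trace, as an rpc.py invocation:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
#   -u 8192   I/O unit size in bytes
#   -c 4096   in-capsule data size (0 in the earlier no_in_capsule run)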
00:12:31.962   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:12:31.962   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:31.962   13:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:32.528  Malloc1
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:32.528  [2024-12-14 13:38:32.136959] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
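The provisioning sequence above, collected into the equivalent rpc.py calls (arguments verbatim from the trace): create a 512 MiB RAM-backed bdev, wrap it in a subsystem, and listen on the RDMA address:

scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420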
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:12:32.528  {
00:12:32.528  "name": "Malloc1",
00:12:32.528  "aliases": [
00:12:32.528  "65a88159-3e91-444d-8e77-3b7dbf0bb325"
00:12:32.528  ],
00:12:32.528  "product_name": "Malloc disk",
00:12:32.528  "block_size": 512,
00:12:32.528  "num_blocks": 1048576,
00:12:32.528  "uuid": "65a88159-3e91-444d-8e77-3b7dbf0bb325",
00:12:32.528  "assigned_rate_limits": {
00:12:32.528  "rw_ios_per_sec": 0,
00:12:32.528  "rw_mbytes_per_sec": 0,
00:12:32.528  "r_mbytes_per_sec": 0,
00:12:32.528  "w_mbytes_per_sec": 0
00:12:32.528  },
00:12:32.528  "claimed": true,
00:12:32.528  "claim_type": "exclusive_write",
00:12:32.528  "zoned": false,
00:12:32.528  "supported_io_types": {
00:12:32.528  "read": true,
00:12:32.528  "write": true,
00:12:32.528  "unmap": true,
00:12:32.528  "flush": true,
00:12:32.528  "reset": true,
00:12:32.528  "nvme_admin": false,
00:12:32.528  "nvme_io": false,
00:12:32.528  "nvme_io_md": false,
00:12:32.528  "write_zeroes": true,
00:12:32.528  "zcopy": true,
00:12:32.528  "get_zone_info": false,
00:12:32.528  "zone_management": false,
00:12:32.528  "zone_append": false,
00:12:32.528  "compare": false,
00:12:32.528  "compare_and_write": false,
00:12:32.528  "abort": true,
00:12:32.528  "seek_hole": false,
00:12:32.528  "seek_data": false,
00:12:32.528  "copy": true,
00:12:32.528  "nvme_iov_md": false
00:12:32.528  },
00:12:32.528  "memory_domains": [
00:12:32.528  {
00:12:32.528  "dma_device_id": "system",
00:12:32.528  "dma_device_type": 1
00:12:32.528  },
00:12:32.528  {
00:12:32.528  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:32.528  "dma_device_type": 2
00:12:32.528  }
00:12:32.528  ],
00:12:32.528  "driver_specific": {}
00:12:32.528  }
00:12:32.528  ]'
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:12:32.528     13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:12:32.528    13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
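get_bdev_size derives the size from the bdev's JSON as block_size × num_blocks, reported in MiB, and filesystem.sh converts it back to bytes. Following the jq calls in the trace:

bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
bdev_size=$(( bs * nb / 1024 / 1024 ))                                  # 512 MiB
malloc_size=$(( bdev_size * 1024 * 1024 ))                              # 536870912 bytes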
00:12:32.528   13:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:12:33.900   13:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:12:33.900   13:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:12:33.900   13:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:33.900   13:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:33.900   13:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:12:35.797   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:35.797    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
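waitforserial is the mirror image of the disconnect wait: poll lsblk until the expected count of devices carrying this serial appears. Reconstructed from autotest_common.sh@1202-1212 in the trace; the sleep placement inside the loop is an assumption:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}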
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:12:35.798    13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
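The serial is mapped back to a kernel device name with a PCRE lookahead, and the device size is read from sysfs (sec_size_to_bytes; the sector arithmetic below is an assumption — the trace only shows the final byte count):

nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# /sys/block/<dev>/size counts 512-byte sectors
nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))   # 536870912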
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:12:35.798   13:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
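With the sizes matched, the namespace gets a single GPT partition spanning the whole device (target/filesystem.sh@66-70), verbatim from the trace:

mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe   # force the kernel to re-read the partition table
sleep 1     # let udev settle so /dev/nvme0n1p1 exists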
00:12:36.730   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:12:36.730   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:12:36.730   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:36.730   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:36.730   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:36.987  ************************************
00:12:36.987  START TEST filesystem_in_capsule_ext4
00:12:36.987  ************************************
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:12:36.987  mke2fs 1.47.0 (5-Feb-2023)
00:12:36.987  Discarding device blocks: done
00:12:36.987  Creating filesystem with 522240 1k blocks and 130560 inodes
00:12:36.987  Filesystem UUID: 73577b10-4a69-4fb9-9084-a8544d9f433f
00:12:36.987  Superblock backups stored on blocks: 
00:12:36.987  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:12:36.987  
00:12:36.987  Allocating group tables: done
00:12:36.987  Writing inode tables: done
00:12:36.987  Creating journal (8192 blocks): done
00:12:36.987  Writing superblocks and filesystem accounting information: done
00:12:36.987  
00:12:36.987   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3232504
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:36.988  
00:12:36.988  real	0m0.209s
00:12:36.988  user	0m0.022s
00:12:36.988  sys	0m0.088s
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:36.988   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:12:36.988  ************************************
00:12:36.988  END TEST filesystem_in_capsule_ext4
00:12:36.988  ************************************
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:37.245  ************************************
00:12:37.245  START TEST filesystem_in_capsule_btrfs
00:12:37.245  ************************************
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:37.245  btrfs-progs v6.8.1
00:12:37.245  See https://btrfs.readthedocs.io for more information.
00:12:37.245  
00:12:37.245  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:37.245  NOTE: several default settings have changed in version 5.15, please make sure
00:12:37.245        this does not affect your deployments:
00:12:37.245        - DUP for metadata (-m dup)
00:12:37.245        - enabled no-holes (-O no-holes)
00:12:37.245        - enabled free-space-tree (-R free-space-tree)
00:12:37.245  
00:12:37.245  Label:              (null)
00:12:37.245  UUID:               fa6be18d-e3b1-449f-929d-7549f5c95873
00:12:37.245  Node size:          16384
00:12:37.245  Sector size:        4096	(CPU page size: 4096)
00:12:37.245  Filesystem size:    510.00MiB
00:12:37.245  Block group profiles:
00:12:37.245    Data:             single            8.00MiB
00:12:37.245    Metadata:         DUP              32.00MiB
00:12:37.245    System:           DUP               8.00MiB
00:12:37.245  SSD detected:       yes
00:12:37.245  Zoned device:       no
00:12:37.245  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:12:37.245  Checksum:           crc32c
00:12:37.245  Number of devices:  1
00:12:37.245  Devices:
00:12:37.245     ID        SIZE  PATH          
00:12:37.245      1   510.00MiB  /dev/nvme0n1p1
00:12:37.245  
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:12:37.245   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:37.502   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:12:37.502   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:37.502   13:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:37.502   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3232504
00:12:37.502   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:37.502   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:37.502   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:37.502   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:37.502  
00:12:37.503  real	0m0.249s
00:12:37.503  user	0m0.030s
00:12:37.503  sys	0m0.125s
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:37.503  ************************************
00:12:37.503  END TEST filesystem_in_capsule_btrfs
00:12:37.503  ************************************
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:37.503  ************************************
00:12:37.503  START TEST filesystem_in_capsule_xfs
00:12:37.503  ************************************
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:37.503   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:37.503  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:12:37.503           =                       sectsz=512   attr=2, projid32bit=1
00:12:37.503           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:12:37.503           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:12:37.503  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:12:37.503           =                       sunit=0      swidth=0 blks
00:12:37.503  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:12:37.503  log      =internal log           bsize=4096   blocks=16384, version=2
00:12:37.503           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:12:37.503  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:12:37.760  Discarding blocks...Done.
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3232504
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:37.760  
00:12:37.760  real	0m0.228s
00:12:37.760  user	0m0.026s
00:12:37.760  sys	0m0.084s
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:37.760  ************************************
00:12:37.760  END TEST filesystem_in_capsule_xfs
00:12:37.760  ************************************
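The trace above is the whole XFS round-trip in miniature: format the partition, mount it, prove a write survives a sync, then unmount and confirm both the namespace and the partition are still visible. Condensed into standalone commands (paths as logged; not the exact target/filesystem.sh body, which also retries and checks the target pid):

  mkfs.xfs -f /dev/nvme0n1p1               # -f: overwrite any stale signature
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync            # write something and flush it
  rm /mnt/device/aaa && sync
  umount /mnt/device
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present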
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:37.760   13:38:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:38.692  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:38.692   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:38.692   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:12:38.693   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:38.693   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:38.693   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:38.693   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
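Teardown mirrors setup: drop the partition under flock so concurrent users of the namespace cannot race, disconnect the initiator, then poll until the SPDK serial number vanishes from lsblk. Roughly, per the trace (waitforserial_disconnect also caps its retries):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                                # still connected; keep waiting
  done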
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3232504
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3232504 ']'
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3232504
00:12:38.950    13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:38.950    13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232504
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232504'
00:12:38.950  killing process with pid 3232504
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3232504
00:12:38.950   13:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3232504
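killprocess is deliberately paranoid: refuse an empty pid, confirm the pid is alive with the signal-0 probe, check what the process actually is (here reactor_0, and crucially not sudo) before signalling, then wait to reap the exit status. A condensed sketch of the Linux path traced above (the real helper also handles the sudo-wrapper case rather than refusing it):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # never signal an empty argument
    kill -0 "$pid" 2> /dev/null || return 0   # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1         # do not SIGTERM the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap; works because it is our child
  }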
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:12:42.229  
00:12:42.229  real	0m11.232s
00:12:42.229  user	0m41.730s
00:12:42.229  sys	0m1.485s
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:42.229  ************************************
00:12:42.229  END TEST nvmf_filesystem_in_capsule
00:12:42.229  ************************************
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:12:42.229  rmmod nvme_rdma
00:12:42.229  rmmod nvme_fabrics
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
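Module unload right after a disconnect can transiently fail while references drain, so nvmfcleanup disables errexit and retries: up to 20 passes of modprobe -r nvme-rdma (the rmmod nvme_rdma / rmmod nvme_fabrics lines above are modprobe -v narrating the dependent removals) followed by nvme-fabrics, then errexit is restored. Sketched:

  set +e
  for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
  done
  set -e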
00:12:42.229  
00:12:42.229  real	0m29.633s
00:12:42.229  user	1m24.402s
00:12:42.229  sys	0m8.517s
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:42.229  ************************************
00:12:42.229  END TEST nvmf_filesystem
00:12:42.229  ************************************
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:42.229  ************************************
00:12:42.229  START TEST nvmf_target_discovery
00:12:42.229  ************************************
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma
00:12:42.229  * Looking for test storage...
00:12:42.229  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
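All of the scripts/common.sh chatter above is a pure-bash version comparison: split both versions on any of . - :, then compare numerically component by component, padding the shorter array with zeros. Here lcov 1.15 < 2, so the pre-2.x option spelling gets selected below. An equivalent compact function (my condensation, not the repo's exact code):

  lt() {                       # true when version $1 sorts before version $2
    local IFS=.-:
    local -a v1 v2
    local i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
      if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1                   # equal is not less-than
  }
  lt 1.15 2 && echo "use pre-2.x lcov flags"   # matches the trace above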
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:42.229  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.229  		--rc genhtml_branch_coverage=1
00:12:42.229  		--rc genhtml_function_coverage=1
00:12:42.229  		--rc genhtml_legend=1
00:12:42.229  		--rc geninfo_all_blocks=1
00:12:42.229  		--rc geninfo_unexecuted_blocks=1
00:12:42.229  		
00:12:42.229  		'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:42.229  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.229  		--rc genhtml_branch_coverage=1
00:12:42.229  		--rc genhtml_function_coverage=1
00:12:42.229  		--rc genhtml_legend=1
00:12:42.229  		--rc geninfo_all_blocks=1
00:12:42.229  		--rc geninfo_unexecuted_blocks=1
00:12:42.229  		
00:12:42.229  		'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:42.229  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.229  		--rc genhtml_branch_coverage=1
00:12:42.229  		--rc genhtml_function_coverage=1
00:12:42.229  		--rc genhtml_legend=1
00:12:42.229  		--rc geninfo_all_blocks=1
00:12:42.229  		--rc geninfo_unexecuted_blocks=1
00:12:42.229  		
00:12:42.229  		'
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:42.229  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.229  		--rc genhtml_branch_coverage=1
00:12:42.229  		--rc genhtml_function_coverage=1
00:12:42.229  		--rc genhtml_legend=1
00:12:42.229  		--rc geninfo_all_blocks=1
00:12:42.229  		--rc geninfo_unexecuted_blocks=1
00:12:42.229  		
00:12:42.229  		'
00:12:42.229   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:42.229     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:12:42.229    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:42.230    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:42.230    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:42.230    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:42.230    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:12:42.230     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:12:42.488     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:42.488     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:42.488     13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:42.488      13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.488      13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.488      13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.488      13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:12:42.488      13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
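The eye-watering PATH above is expected, not a fault in this run: paths/export.sh prepends its go/protoc/golangci directories every time it is sourced, and nested test scripts source it repeatedly. Lookup stops at the first match, so the duplicates are harmless; if one wanted to tame them, a dedup helper (hypothetical, not part of SPDK) could run after sourcing:

  dedup_path() {
    local -A seen
    local out="" dir IFS=:
    for dir in $PATH; do                 # unquoted on purpose: split on ':'
      [[ -n ${seen[$dir]:-} ]] && continue
      seen[$dir]=1
      out+="${out:+:}$dir"
    done
    printf '%s\n' "$out"
  }
  PATH=$(dedup_path)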
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:42.488  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
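The one genuine complaint in this stretch is nvmf/common.sh line 33: the traced test is '[' '' -eq 1 ']', an unset flag reaching a numeric comparison, so [ reports "integer expression expected" and the branch is skipped by accident rather than by decision. Defaulting the expansion would make the skip deliberate; the flag name below is a stand-in, since the log does not show which variable expanded empty:

  # before:  [ "$SOME_FLAG" -eq 1 ]       -> error when SOME_FLAG is empty
  # after:   default empty/unset to 0 so the comparison is always numeric
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    :                                     # guarded behavior unchanged
  fi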
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:42.488    13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:12:42.488   13:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:12:50.596   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:12:50.597  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:12:50.597  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:12:50.597  Found net devices under 0000:d9:00.0: mlx_0_0
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:12:50.597  Found net devices under 0000:d9:00.1: mlx_0_1
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
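Device discovery above is pure sysfs, no vendor tooling: the two Mellanox functions (vendor 0x15b3, device 0x1015, i.e. ConnectX-4 Lx) are matched out of the PCI cache, and each function's netdev name is read from its own sysfs directory, exactly as the @411/@428 lines trace:

  # each PCI function lists its network interfaces under .../net/
  for pci in 0000:d9:00.0 0000:d9:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] || continue          # glob may not match anything
      echo "Found net devices under $pci: ${dev##*/}"
    done
  done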
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:50.597     13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:50.597     13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:12:50.597  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:50.597      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:12:50.597      altname enp217s0f0np0
00:12:50.597      altname ens818f0np0
00:12:50.597      inet 192.168.100.8/24 scope global mlx_0_0
00:12:50.597         valid_lft forever preferred_lft forever
00:12:50.597   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:12:50.597    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:50.598    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:50.598    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:50.598    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:12:50.598  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:50.598      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:12:50.598      altname enp217s0f1np1
00:12:50.598      altname ens818f1np1
00:12:50.598      inet 192.168.100.9/24 scope global mlx_0_1
00:12:50.598         valid_lft forever preferred_lft forever
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
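Both ports already carry their 192.168.100.0/24 addresses (note they still report NO-CARRIER above; only the address assignment matters at this stage). The extraction at @117 is a three-stage pipe over the one-line ip output:

  get_ip_address() {
    # field 4 of `ip -o -4 addr show <dev>` is the CIDR, e.g. 192.168.100.8/24
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node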
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:12:50.598   13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:12:50.598    13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:12:50.598     13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list
00:12:50.598     13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:50.598     13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:50.598      13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:50.598      13:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1
00:12:50.598     13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:12:50.598  192.168.100.9'
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:12:50.598  192.168.100.9'
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:12:50.598  192.168.100.9'
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1
00:12:50.598    13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
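The two discovered addresses come back as a single newline-separated string, and the first/second target IPs are carved out with head and tail, per the @485/@486 traces:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)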
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3237975
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3237975
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3237975 ']'
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:50.598  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.598  [2024-12-14 13:38:49.188514] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:12:50.598  [2024-12-14 13:38:49.188604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:50.598  [2024-12-14 13:38:49.322042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:50.598  [2024-12-14 13:38:49.425500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:50.598  [2024-12-14 13:38:49.425553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:50.598  [2024-12-14 13:38:49.425565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:50.598  [2024-12-14 13:38:49.425578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:50.598  [2024-12-14 13:38:49.425589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:50.598  [2024-12-14 13:38:49.428123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:50.598  [2024-12-14 13:38:49.428169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:50.598  [2024-12-14 13:38:49.428200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:50.598  [2024-12-14 13:38:49.428337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:50.598   13:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
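nvmfappstart boils down to: launch nvmf_tgt with the shared-memory id, tracepoint mask and core mask seen in the trace, remember its pid, and poll the RPC socket until the app answers. Roughly (the real waitforlisten is more defensive about retries and socket paths):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1           # target died during startup
    sleep 0.5
  done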
00:12:50.598   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:50.598   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:12:50.598   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.598   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.598  [2024-12-14 13:38:50.069613] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fcd493a4940) succeed.
00:12:50.598  [2024-12-14 13:38:50.080362] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fcd49360940) succeed.
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
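With the app up, the first RPC creates the RDMA transport; the two rdma.c NOTICE lines are the mlx5 ports being claimed as IB devices. Issued by hand against the default socket it would be:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192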
00:12:50.857    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857  Null1
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857  [2024-12-14 13:38:50.389499] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
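What follows is four passes of the same recipe (seq 1 4): a null bdev, a subsystem that allows any host (-a) with a fixed serial (-s), the namespace, and an RDMA listener on the first target IP. One iteration, spelled out with rpc.py (the sizes are NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script header above):

  for i in $(seq 1 4); do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
  done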
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857  Null2
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857  Null3
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857  Null4
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.857   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.858   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.858   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420
00:12:51.116  
00:12:51.116  Discovery Log Number of Records 6, Generation counter 6
00:12:51.116  =====Discovery Log Entry 0======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: current discovery subsystem
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4420
00:12:51.116  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  explicit discovery connections, duplicate discovery information
00:12:51.116  rdma_prtype: not specified
00:12:51.116  rdma_qptype: connected
00:12:51.116  rdma_cms:    rdma-cm
00:12:51.116  rdma_pkey: 0x0000
00:12:51.116  =====Discovery Log Entry 1======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: nvme subsystem
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4420
00:12:51.116  subnqn:  nqn.2016-06.io.spdk:cnode1
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  none
00:12:51.116  rdma_prtype: not specified
00:12:51.116  rdma_qptype: connected
00:12:51.116  rdma_cms:    rdma-cm
00:12:51.116  rdma_pkey: 0x0000
00:12:51.116  =====Discovery Log Entry 2======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: nvme subsystem
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4420
00:12:51.116  subnqn:  nqn.2016-06.io.spdk:cnode2
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  none
00:12:51.116  rdma_prtype: not specified
00:12:51.116  rdma_qptype: connected
00:12:51.116  rdma_cms:    rdma-cm
00:12:51.116  rdma_pkey: 0x0000
00:12:51.116  =====Discovery Log Entry 3======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: nvme subsystem
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4420
00:12:51.116  subnqn:  nqn.2016-06.io.spdk:cnode3
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  none
00:12:51.116  rdma_prtype: not specified
00:12:51.116  rdma_qptype: connected
00:12:51.116  rdma_cms:    rdma-cm
00:12:51.116  rdma_pkey: 0x0000
00:12:51.116  =====Discovery Log Entry 4======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: nvme subsystem
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4420
00:12:51.116  subnqn:  nqn.2016-06.io.spdk:cnode4
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  none
00:12:51.116  rdma_prtype: not specified
00:12:51.116  rdma_qptype: connected
00:12:51.116  rdma_cms:    rdma-cm
00:12:51.116  rdma_pkey: 0x0000
00:12:51.116  =====Discovery Log Entry 5======
00:12:51.116  trtype:  rdma
00:12:51.116  adrfam:  ipv4
00:12:51.116  subtype: discovery subsystem referral
00:12:51.116  treq:    not required
00:12:51.116  portid:  0
00:12:51.116  trsvcid: 4430
00:12:51.116  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:12:51.116  traddr:  192.168.100.8
00:12:51.116  eflags:  none
00:12:51.116  rdma_prtype: unrecognized
00:12:51.116  rdma_qptype: unrecognized
00:12:51.116  rdma_cms:    unrecognized
00:12:51.116  rdma_pkey: 0x0000
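Six records is exactly what the setup above predicts: entry 0 is the discovery subsystem the query itself landed on, entries 1-4 are cnode1 through cnode4 on port 4420, and entry 5 is the referral added on port 4430 (its RDMA attributes print as unrecognized, presumably because a referral record does not carry them). A hedged manual equivalent of the query:

    # Same discovery query, run by hand against the target set up above.
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e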
00:12:51.116   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:51.116  Perform nvmf subsystem discovery via RPC
00:12:51.116   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:51.116   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.116   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.116  [
00:12:51.116    {
00:12:51.116      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:51.116      "subtype": "Discovery",
00:12:51.116      "listen_addresses": [
00:12:51.116        {
00:12:51.116          "trtype": "RDMA",
00:12:51.116          "adrfam": "IPv4",
00:12:51.116          "traddr": "192.168.100.8",
00:12:51.116          "trsvcid": "4420"
00:12:51.116        }
00:12:51.116      ],
00:12:51.116      "allow_any_host": true,
00:12:51.116      "hosts": []
00:12:51.116    },
00:12:51.116    {
00:12:51.116      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:51.116      "subtype": "NVMe",
00:12:51.116      "listen_addresses": [
00:12:51.116        {
00:12:51.116          "trtype": "RDMA",
00:12:51.116          "adrfam": "IPv4",
00:12:51.116          "traddr": "192.168.100.8",
00:12:51.116          "trsvcid": "4420"
00:12:51.116        }
00:12:51.116      ],
00:12:51.116      "allow_any_host": true,
00:12:51.116      "hosts": [],
00:12:51.116      "serial_number": "SPDK00000000000001",
00:12:51.116      "model_number": "SPDK bdev Controller",
00:12:51.116      "max_namespaces": 32,
00:12:51.116      "min_cntlid": 1,
00:12:51.116      "max_cntlid": 65519,
00:12:51.116      "namespaces": [
00:12:51.116        {
00:12:51.116          "nsid": 1,
00:12:51.116          "bdev_name": "Null1",
00:12:51.116          "name": "Null1",
00:12:51.116          "nguid": "D774274DF0C2449AB10E60059F53A0CB",
00:12:51.116          "uuid": "d774274d-f0c2-449a-b10e-60059f53a0cb"
00:12:51.116        }
00:12:51.116      ]
00:12:51.116    },
00:12:51.116    {
00:12:51.116      "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:51.116      "subtype": "NVMe",
00:12:51.116      "listen_addresses": [
00:12:51.116        {
00:12:51.116          "trtype": "RDMA",
00:12:51.116          "adrfam": "IPv4",
00:12:51.116          "traddr": "192.168.100.8",
00:12:51.116          "trsvcid": "4420"
00:12:51.116        }
00:12:51.116      ],
00:12:51.116      "allow_any_host": true,
00:12:51.116      "hosts": [],
00:12:51.116      "serial_number": "SPDK00000000000002",
00:12:51.116      "model_number": "SPDK bdev Controller",
00:12:51.116      "max_namespaces": 32,
00:12:51.116      "min_cntlid": 1,
00:12:51.116      "max_cntlid": 65519,
00:12:51.116      "namespaces": [
00:12:51.116        {
00:12:51.116          "nsid": 1,
00:12:51.116          "bdev_name": "Null2",
00:12:51.116          "name": "Null2",
00:12:51.116          "nguid": "0DCA31CC20E54733869789CBA75ACDF9",
00:12:51.116          "uuid": "0dca31cc-20e5-4733-8697-89cba75acdf9"
00:12:51.116        }
00:12:51.116      ]
00:12:51.116    },
00:12:51.116    {
00:12:51.116      "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:51.116      "subtype": "NVMe",
00:12:51.116      "listen_addresses": [
00:12:51.116        {
00:12:51.116          "trtype": "RDMA",
00:12:51.116          "adrfam": "IPv4",
00:12:51.116          "traddr": "192.168.100.8",
00:12:51.116          "trsvcid": "4420"
00:12:51.116        }
00:12:51.116      ],
00:12:51.116      "allow_any_host": true,
00:12:51.116      "hosts": [],
00:12:51.116      "serial_number": "SPDK00000000000003",
00:12:51.116      "model_number": "SPDK bdev Controller",
00:12:51.116      "max_namespaces": 32,
00:12:51.116      "min_cntlid": 1,
00:12:51.116      "max_cntlid": 65519,
00:12:51.116      "namespaces": [
00:12:51.116        {
00:12:51.116          "nsid": 1,
00:12:51.116          "bdev_name": "Null3",
00:12:51.116          "name": "Null3",
00:12:51.116          "nguid": "F66CB7B041494673AF1318642F862F9F",
00:12:51.116          "uuid": "f66cb7b0-4149-4673-af13-18642f862f9f"
00:12:51.116        }
00:12:51.116      ]
00:12:51.116    },
00:12:51.116    {
00:12:51.116      "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:51.116      "subtype": "NVMe",
00:12:51.116      "listen_addresses": [
00:12:51.116        {
00:12:51.116          "trtype": "RDMA",
00:12:51.116          "adrfam": "IPv4",
00:12:51.116          "traddr": "192.168.100.8",
00:12:51.116          "trsvcid": "4420"
00:12:51.116        }
00:12:51.116      ],
00:12:51.116      "allow_any_host": true,
00:12:51.116      "hosts": [],
00:12:51.116      "serial_number": "SPDK00000000000004",
00:12:51.116      "model_number": "SPDK bdev Controller",
00:12:51.116      "max_namespaces": 32,
00:12:51.116      "min_cntlid": 1,
00:12:51.116      "max_cntlid": 65519,
00:12:51.117      "namespaces": [
00:12:51.117        {
00:12:51.117          "nsid": 1,
00:12:51.117          "bdev_name": "Null4",
00:12:51.117          "name": "Null4",
00:12:51.117          "nguid": "F7872DED72C648FC98197CEDFCBB81AE",
00:12:51.117          "uuid": "f7872ded-72c6-48fc-9819-7cedfcbb81ae"
00:12:51.117        }
00:12:51.117      ]
00:12:51.117    }
00:12:51.117  ]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
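The same jq filtering the script applies to bdev_get_bdevs at line 49 below works on this dump too; a hedged one-liner that reduces it to the five NQNs listed above (the discovery subsystem plus cnode1-cnode4):

    # Sketch: extract just the subsystem NQNs from the RPC output above.
    ./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'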
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
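check_bdevs comes back empty, which is the teardown assertion: every Null bdev deleted at line 44 must be gone before the trap is cleared. A hedged equivalent of that check:

    # Sketch of the post-teardown assertion performed above.
    remaining=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
    [ -z "$remaining" ] || { echo "leaked bdevs: $remaining" >&2; exit 1; }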
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:12:51.117  rmmod nvme_rdma
00:12:51.117  rmmod nvme_fabrics
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
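nvmfcleanup unloads the host-side modules under set +e with up to 20 attempts, since nvme-rdma can stay pinned briefly while queue pairs drain; here rmmod succeeded on the first pass. A hedged sketch of that retry shape (the real loop in nvmf/common.sh may differ in detail):

    # Sketch of the unload-with-retry pattern traced above.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break    # break once nothing holds the module
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e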
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3237975 ']'
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3237975
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3237975 ']'
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3237975
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:12:51.117   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:51.117    13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3237975
00:12:51.375   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:51.375   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:51.375   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3237975'
00:12:51.375  killing process with pid 3237975
00:12:51.376   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3237975
00:12:51.376   13:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3237975
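killprocess only signals the PID after ps --no-headers -o comm= confirms it still names the SPDK reactor (reactor_0), so a recycled PID is never killed, and the wait afterwards collects the target's exit status. A hedged sketch of that guard; guarded_kill is a hypothetical helper, not the autotest function:

    # Sketch of the guarded kill traced above (guarded_kill is hypothetical).
    guarded_kill() {
        local pid=$1
        [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] || return 0
        kill "$pid"
        wait "$pid"    # valid here only because the target was launched by this shell
    }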
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:12:53.275  
00:12:53.275  real	0m10.826s
00:12:53.275  user	0m13.076s
00:12:53.275  sys	0m6.013s
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:53.275  ************************************
00:12:53.275  END TEST nvmf_target_discovery
00:12:53.275  ************************************
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:53.275  ************************************
00:12:53.275  START TEST nvmf_referrals
00:12:53.275  ************************************
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:12:53.275  * Looking for test storage...
00:12:53.275  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
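The trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2: cmp_versions splits both strings on IFS=.-:, walks the components of the longer version, and returns success at the first strictly smaller component (missing components are treated as 0). A hedged condensation of that logic, not the verbatim function:

    # Sketch of the component-wise version test traced above.
    version_lt() {    # usage: version_lt 1.15 2
        local IFS=.-: i; local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1    # equal is not less-than
    }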
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:53.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.275  		--rc genhtml_branch_coverage=1
00:12:53.275  		--rc genhtml_function_coverage=1
00:12:53.275  		--rc genhtml_legend=1
00:12:53.275  		--rc geninfo_all_blocks=1
00:12:53.275  		--rc geninfo_unexecuted_blocks=1
00:12:53.275  		
00:12:53.275  		'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:53.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.275  		--rc genhtml_branch_coverage=1
00:12:53.275  		--rc genhtml_function_coverage=1
00:12:53.275  		--rc genhtml_legend=1
00:12:53.275  		--rc geninfo_all_blocks=1
00:12:53.275  		--rc geninfo_unexecuted_blocks=1
00:12:53.275  		
00:12:53.275  		'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:53.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.275  		--rc genhtml_branch_coverage=1
00:12:53.275  		--rc genhtml_function_coverage=1
00:12:53.275  		--rc genhtml_legend=1
00:12:53.275  		--rc geninfo_all_blocks=1
00:12:53.275  		--rc geninfo_unexecuted_blocks=1
00:12:53.275  		
00:12:53.275  		'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:53.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.275  		--rc genhtml_branch_coverage=1
00:12:53.275  		--rc genhtml_function_coverage=1
00:12:53.275  		--rc genhtml_legend=1
00:12:53.275  		--rc geninfo_all_blocks=1
00:12:53.275  		--rc geninfo_unexecuted_blocks=1
00:12:53.275  		
00:12:53.275  		'
00:12:53.275   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:53.275    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:53.275     13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:53.275      13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.276      13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.276      13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.276      13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:12:53.276      13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:53.276  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
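The "integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset option variable expands to the empty string, which the numeric -eq test cannot parse, so the check fails noisily but harmlessly and the branch is skipped. A hedged defensive rewrite, with $SOME_FLAG as a stand-in for whatever variable is unset here:

    # Sketch: default the operand so '' never reaches -eq ($SOME_FLAG is hypothetical).
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi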
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:53.276    13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:12:53.276   13:38:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:59.828   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:12:59.829  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:12:59.829  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:12:59.829  Found net devices under 0000:d9:00.0: mlx_0_0
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:12:59.829  Found net devices under 0000:d9:00.1: mlx_0_1
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
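The device scan matched both functions of the Mellanox 0x15b3:0x1015 NIC (0000:d9:00.0 and 0000:d9:00.1) against the mlx allow-list built above, then mapped each PCI address to its netdev through sysfs. The mapping step, condensed into a hedged sketch:

    # Sketch of the sysfs PCI-to-netdev lookup traced above.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${net##*/}"
        done
    done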
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm
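load_ib_rdma_modules brings up the kernel RDMA stack before any addresses are assigned (modprobe resolves each module's own dependencies). Condensed, the loads traced above amount to:

    # The module set loaded above, condensed into a loop.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done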
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:59.829     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:59.829     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:12:59.829  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:59.829      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:12:59.829      altname enp217s0f0np0
00:12:59.829      altname ens818f0np0
00:12:59.829      inet 192.168.100.8/24 scope global mlx_0_0
00:12:59.829         valid_lft forever preferred_lft forever
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:12:59.829  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:59.829      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:12:59.829      altname enp217s0f1np1
00:12:59.829      altname ens818f1np1
00:12:59.829      inet 192.168.100.9/24 scope global mlx_0_1
00:12:59.829         valid_lft forever preferred_lft forever
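allocate_nic_ips resolves each RDMA netdev's IPv4 address with the ip/awk/cut pipeline traced above, and would only assign one from NVMF_IP_PREFIX if the -z check found none; here both ports already carry addresses (192.168.100.8 and 192.168.100.9), even though both links report NO-CARRIER/state DOWN. The extraction, condensed:

    # One-liner form of the address lookup traced above.
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    get_ip_address mlx_0_0    # -> 192.168.100.8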
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:12:59.829   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:12:59.829    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:12:59.829     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:59.830      13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:59.830      13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:12:59.830     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:00.087     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:13:00.087     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:00.087     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:13:00.087     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1
00:13:00.087     13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2
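[annotation] get_rdma_if_list (@105-@109 above) intersects the PCI-enumerated net devices with the RDMA-capable devices reported by rxe_cfg, echoing each match and using bash's `continue 2` to jump straight to the next outer-loop iteration. The control flow, reduced to a self-contained sketch with the device lists from this run:

    net_devs=(mlx_0_0 mlx_0_1)        # from PCI enumeration
    rxe_net_devs=(mlx_0_0 mlx_0_1)    # from `rxe_cfg rxe-net`

    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2    # resume the *outer* loop; skip remaining rxe candidates
            fi
        done
    done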
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:13:00.087    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:13:00.088  192.168.100.9'
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:13:00.088  192.168.100.9'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:13:00.088  192.168.100.9'
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2
00:13:00.088    13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
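[annotation] Lines @484-@486 split the newline-separated RDMA_IP_LIST into the first and second target addresses with head/tail. The same split in isolation:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9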
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3241959
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3241959
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3241959 ']'
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:00.088  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:00.088   13:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.088  [2024-12-14 13:38:59.734735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:13:00.088  [2024-12-14 13:38:59.734833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:00.346  [2024-12-14 13:38:59.868118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:00.346  [2024-12-14 13:38:59.965742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:00.346  [2024-12-14 13:38:59.965794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:00.346  [2024-12-14 13:38:59.965806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:00.346  [2024-12-14 13:38:59.965818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:00.346  [2024-12-14 13:38:59.965827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:00.346  [2024-12-14 13:38:59.968410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:13:00.346  [2024-12-14 13:38:59.968487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:13:00.346  [2024-12-14 13:38:59.968545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.346  [2024-12-14 13:38:59.968557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
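[annotation] waitforlisten (common/autotest_common.sh, traced at @835-@868 above) blocks until the freshly started nvmf_tgt answers on its RPC socket before the test proceeds. A simplified sketch of such a polling loop; the real helper's internals differ, and spdk_get_version is assumed here merely as a convenient liveness probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died while we waited
            if scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
                return 0                                # RPC server is up
            fi
            sleep 0.5
        done
        return 1
    }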
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.912   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.912  [2024-12-14 13:39:00.636988] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fe956748940) succeed.
00:13:00.912  [2024-12-14 13:39:00.647119] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fe95591a940) succeed.
00:13:01.170   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.170   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
00:13:01.170   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.170   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428  [2024-12-14 13:39:00.912742] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 ***
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
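[annotation] With the target up, referrals.sh drives it over JSON-RPC: @40 created the RDMA transport and @41 opened the discovery listener that the "NVMe/RDMA Target Listening on 192.168.100.8 port 8009" notice above confirms. rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent direct invocations (against the default /var/tmp/spdk.sock socket) would look like this; `discovery` is the shorthand the tooling resolves to the well-known discovery NQN:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery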
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428   13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
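[annotation] The count assertion at @48 is just the referral list's length, which must match the three referrals added at @44-@46:

    count=$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)
    (( count == 3 ))   # one entry per nvmf_discovery_add_referral above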
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:13:01.428    13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:01.428     13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:01.428     13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:01.428     13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428     13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428     13:39:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:01.428     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:13:01.428    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:13:01.428    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:01.428    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:01.428     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:01.428     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:01.428     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:01.428    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
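[annotation] get_referral_ips (referrals.sh@19-@26) produces the same address list two ways so the test can cross-check the target's RPC view against what a host actually sees in the discovery log page. Both pipelines, assembled from the trace (the hostnqn/hostid are the values generated for this run):

    # Target-side view, via JSON-RPC:
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view, via the discovery log page (referral entries only):
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort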
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.428   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:13:01.686   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:01.686    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:01.686     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:13:01.943   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:01.943    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:13:01.944   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:01.944    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:13:01.944    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:13:01.944    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:13:01.944    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:13:01.944    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
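[annotation] get_discovery_entries (@31-@34) filters the same discovery JSON by record subtype; @67/@68 then assert each entry's subnqn: the referral registered with `-n nqn.2016-06.io.spdk:cnode1` surfaces as an "nvme subsystem" record, while the plain discovery referral carries the well-known discovery NQN. A sketch of the filter, assuming the NVME_HOST array that nvmf/common.sh populates with the --hostnqn/--hostid pair (the traced jq uses a literal string where this sketch uses --arg):

    get_discovery_entries() {
        local subtype=$1
        nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json |
            jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
    }

    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
    # -> nqn.2014-08.org.nvmexpress.discovery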
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:02.202     13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:13:02.202   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:02.202    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:13:02.460   13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:13:02.460    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:13:02.460    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:13:02.460    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:13:02.460    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:13:02.460    13:39:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.460    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:02.460    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.460    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:13:02.460    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:02.460    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.460   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:13:02.461    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:13:02.461    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:02.461    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:02.461     13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:13:02.461     13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:02.461     13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:02.719    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:13:02.719  rmmod nvme_rdma
00:13:02.719  rmmod nvme_fabrics
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
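[annotation] nvmfcleanup (@121-@129) unloads the host modules under `set +e` so a still-busy module cannot abort the teardown, retrying up to 20 times; the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines above are modprobe's -v output. The pattern, reduced to a sketch (the real loop's bookkeeping differs slightly):

    sync
    set +e                              # a busy module must not kill the run
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e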
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3241959 ']'
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3241959
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3241959 ']'
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3241959
00:13:02.719    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:02.719    13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3241959
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3241959'
00:13:02.719  killing process with pid 3241959
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3241959
00:13:02.719   13:39:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3241959
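[annotation] killprocess (common/autotest_common.sh@954-@978) first checks what it is about to kill: here `ps --no-headers -o comm=` reports reactor_0, the SPDK primary reactor, so it is safe to signal directly; killing a `sudo` wrapper instead would orphan the real target. A sketch of that guard (the real helper handles the sudo case rather than simply bailing):

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0            # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1            # don't kill the wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap and surface exit status
    }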
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:13:04.622  
00:13:04.622  real	0m11.395s
00:13:04.622  user	0m17.434s
00:13:04.622  sys	0m6.177s
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:04.622  ************************************
00:13:04.622  END TEST nvmf_referrals
00:13:04.622  ************************************
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:04.622  ************************************
00:13:04.622  START TEST nvmf_connect_disconnect
00:13:04.622  ************************************
00:13:04.622   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:13:04.622  * Looking for test storage...
00:13:04.622  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:04.622     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
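[annotation] The `lt 1.15 2` trace above is scripts/common.sh's pure-bash version comparison: both strings are split on '.', '-' and ':' into arrays and compared component by component; here 1 < 2 settles it on the first component, which decides which lcov flags the harness exports next. The same algorithm as a self-contained sketch (the traced script additionally validates each component through its `decimal` helper, omitted here):

    # Return 0 (true) if $1 sorts before $2 as a dotted version string.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # missing parts count as 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 < 2"    # matches the traced result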
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:04.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.622  		--rc genhtml_branch_coverage=1
00:13:04.622  		--rc genhtml_function_coverage=1
00:13:04.622  		--rc genhtml_legend=1
00:13:04.622  		--rc geninfo_all_blocks=1
00:13:04.622  		--rc geninfo_unexecuted_blocks=1
00:13:04.622  		
00:13:04.622  		'
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:04.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.622  		--rc genhtml_branch_coverage=1
00:13:04.622  		--rc genhtml_function_coverage=1
00:13:04.622  		--rc genhtml_legend=1
00:13:04.622  		--rc geninfo_all_blocks=1
00:13:04.622  		--rc geninfo_unexecuted_blocks=1
00:13:04.622  		
00:13:04.622  		'
00:13:04.622    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:04.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.623  		--rc genhtml_branch_coverage=1
00:13:04.623  		--rc genhtml_function_coverage=1
00:13:04.623  		--rc genhtml_legend=1
00:13:04.623  		--rc geninfo_all_blocks=1
00:13:04.623  		--rc geninfo_unexecuted_blocks=1
00:13:04.623  		
00:13:04.623  		'
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:04.623  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.623  		--rc genhtml_branch_coverage=1
00:13:04.623  		--rc genhtml_function_coverage=1
00:13:04.623  		--rc genhtml_legend=1
00:13:04.623  		--rc geninfo_all_blocks=1
00:13:04.623  		--rc geninfo_unexecuted_blocks=1
00:13:04.623  		
00:13:04.623  		'
00:13:04.623   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:04.623     13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:04.623      13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.623      13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.623      13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.623      13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:13:04.623      13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
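[annotation] The PATH values above keep growing because paths/export.sh prepends the toolchain directories each time it is sourced, and each nested test appears to re-source it; the duplication is harmless, only noisy. Purely as an illustration (the harness does not do this), duplicates could be collapsed while preserving order:

    dedup_path() {
        local dir
        local -a keep=()
        local -A seen=()
        local IFS=:
        for dir in $PATH; do                 # IFS=: makes this split on colons
            [[ ${seen[$dir]:-} ]] && continue
            seen[$dir]=1
            keep+=("$dir")
        done
        echo "${keep[*]}"                    # [*] re-joins with the first IFS char
    }

    PATH=$(dedup_path)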
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:04.623  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:04.623    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
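[annotation] The "line 33: [: : integer expression expected" message above comes from the harness itself: common.sh@33 hands an empty variable to `[ ... -eq 1 ]`, and test(1) rejects the empty string as a non-integer. The run continues because the failed test simply skips that branch. Reproduced and quieted in two lines:

    var=''
    [ "$var" -eq 1 ] && echo yes        # -> [: : integer expression expected
    [ "${var:-0}" -eq 1 ] && echo yes   # defaulting the expansion keeps it quiet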
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:04.882    13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:13:04.882   13:39:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
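[annotation] gather_supported_nvmf_pci_devs builds the per-NIC-family arrays (e810, x722, mlx) by indexing a pci_bus_cache map keyed "vendor_id:device_id" (0x15b3 is the Mellanox vendor id), then picks the family matching the configured driver. A reduced sketch of the lookup, with a cache shape assumed from the trace and populated with this run's two ports:

    declare -A pci_bus_cache=(
        ["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"   # the two functions found below
    )
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # intentionally unquoted: one element per BDF
    pci_devs=("${mlx[@]}")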
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:13:11.525  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:13:11.525  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:13:11.525  Found net devices under 0000:d9:00.0: mlx_0_0
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:13:11.525  Found net devices under 0000:d9:00.1: mlx_0_1
00:13:11.525   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
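[annotation] The net-device discovery in the loop above is a plain sysfs glob; condensed for a single port (the BDF is hard-coded here only for illustration):

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")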
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:13:11.526    13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm
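[annotation] rdma_device_init first loads the kernel RDMA stack; the function traced above reduces to the following, with the module list taken verbatim from the trace:

    load_ib_rdma_modules() {
        [ "$(uname)" != Linux ] && return 0
        local mod
        # connection managers, core verbs, and their userspace access modules
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }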
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips
00:13:11.526   13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:13:11.526    13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list
00:13:11.526    13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:13:11.526    13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:13:11.526     13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:13:11.526     13:39:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:13:11.526  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:13:11.526      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:13:11.526      altname enp217s0f0np0
00:13:11.526      altname ens818f0np0
00:13:11.526      inet 192.168.100.8/24 scope global mlx_0_0
00:13:11.526         valid_lft forever preferred_lft forever
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:13:11.526  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:13:11.526      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:13:11.526      altname enp217s0f1np1
00:13:11.526      altname ens818f1np1
00:13:11.526      inet 192.168.100.9/24 scope global mlx_0_1
00:13:11.526         valid_lft forever preferred_lft forever
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
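[annotation] allocate_nic_ips resolves each RDMA interface to its first IPv4 address; the helper traced above is just an ip/awk/cut pipeline:

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one line per address; field 4 is addr/prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node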
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:13:11.526      13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:13:11.526      13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:13:11.526     13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:13:11.526  192.168.100.9'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:13:11.526  192.168.100.9'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:13:11.526  192.168.100.9'
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2
00:13:11.526    13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:13:11.526   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma
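[annotation] With both ports addressed, nvmftestinit derives the two target IPs from the newline-separated RDMA_IP_LIST exactly as traced above, then loads the host-side driver:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma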
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3246575
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3246575
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3246575 ']'
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:11.527  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:11.527   13:39:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:11.527  [2024-12-14 13:39:11.251958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:13:11.527  [2024-12-14 13:39:11.252053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:11.785  [2024-12-14 13:39:11.387166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:11.785  [2024-12-14 13:39:11.487236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:11.785  [2024-12-14 13:39:11.487287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:11.785  [2024-12-14 13:39:11.487300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:11.785  [2024-12-14 13:39:11.487313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:11.785  [2024-12-14 13:39:11.487323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:11.785  [2024-12-14 13:39:11.489868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:13:11.785  [2024-12-14 13:39:11.489950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:13:11.785  [2024-12-14 13:39:11.490002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.785  [2024-12-14 13:39:11.490015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:13:12.350   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:12.351   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:13:12.351   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:12.351   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:12.351   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.609   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:12.609   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
00:13:12.609   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.609   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.609  [2024-12-14 13:39:12.107581] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:13:12.609  [2024-12-14 13:39:12.155194] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f0ddb7a4940) succeed.
00:13:12.609  [2024-12-14 13:39:12.165270] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f0ddb760940) succeed.
00:13:12.609   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.609    13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:13:12.609    13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.609    13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.867    13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:12.867  [2024-12-14 13:39:12.413166] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
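[annotation] On the target side, the rpc_cmd calls above amount to this sequence against the running nvmf_tgt (rpc_cmd effectively wraps scripts/rpc.py talking to /var/tmp/spdk.sock); all parameters are as shown in the trace:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512       # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The in-capsule data size requested here is 0; the rdma.c warning above records the target bumping it to the 256-byte minimum required for msdbd=16.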
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']'
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8'
00:13:12.867   13:39:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
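[annotation] The connect_disconnect.sh body itself is not traced past set +x, so the loop below is a reconstruction from the parameters visible above (num_iterations=100, NVME_CONNECT='nvme connect -i 8', the cnode1 listener on 192.168.100.8:4420); any waitfor* helpers between connect and disconnect are omitted:

    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        # nvme-cli prints the "NQN:... disconnected 1 controller(s)" lines seen below
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done

The ~3.3 s cadence of the timestamps below is the per-iteration connect/disconnect round trip; exactly 100 disconnect lines follow.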
00:13:16.147  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:19.428  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:22.709  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:25.990  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:28.519  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:31.803  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:35.085  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:38.363  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:41.644  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:44.925  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:47.452  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:50.731  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:54.012  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:57.293  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:00.628  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:03.155  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:06.433  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:09.715  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:12.995  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:16.274  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:19.556  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:22.084  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:25.365  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:28.646  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:31.928  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:35.208  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:37.737  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:41.018  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:44.300  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:47.579  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:50.924  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:53.451  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:56.730  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:00.012  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:03.293  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:06.574  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:09.854  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:12.379  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:15.659  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:18.939  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:22.219  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:25.497  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:28.028  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:31.308  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:34.589  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:37.870  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:41.199  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:43.728  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:47.008  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:50.287  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:53.568  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:56.848  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:00.128  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:02.654  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:05.933  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:09.212  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:12.491  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:15.773  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:19.052  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:21.580  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:24.860  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:28.140  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:31.479  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:34.759  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:37.286  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:40.565  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:43.849  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:47.128  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:50.409  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:53.689  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:56.212  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:59.492  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:02.774  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:06.058  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:09.340  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:12.623  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:15.152  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:18.450  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:21.775  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:25.057  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:28.339  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:31.622  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:34.151  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:37.434  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:40.716  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:43.999  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:47.282  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:50.565  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:53.099  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:56.379  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:59.661  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:02.941  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:06.228  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:08.847  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:12.129  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:15.411  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:18.692  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:21.974  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:25.256  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:27.785  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:27.785   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:27.785  rmmod nvme_rdma
00:18:28.043  rmmod nvme_fabrics
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
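[annotation] nvmfcleanup retries the module unload because nvme-rdma can still be busy immediately after the last disconnect. A sketch of the path traced above; the break-on-success and sleep details are assumptions, while the module names and the set +e / set -e bracketing are from the trace:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # prints "rmmod nvme_rdma", "rmmod nvme_fabrics"
        sleep 1
    done
    modprobe -v -r nvme-fabrics             # no-op here: already removed as a dependency
    set -e

On this run the first attempt succeeds, which is why only one pair of rmmod lines appears.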
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3246575 ']'
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3246575
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3246575 ']'
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3246575
00:18:28.043    13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:28.043    13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246575
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246575'
00:18:28.043  killing process with pid 3246575
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3246575
00:18:28.043   13:44:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3246575
00:18:29.419   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:29.419   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:29.419  
00:18:29.419  real	5m25.013s
00:18:29.419  user	21m5.761s
00:18:29.419  sys	0m18.569s
00:18:29.419   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:29.419   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:18:29.419  ************************************
00:18:29.419  END TEST nvmf_connect_disconnect
00:18:29.419  ************************************
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:29.678  ************************************
00:18:29.678  START TEST nvmf_multitarget
00:18:29.678  ************************************
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma
00:18:29.678  * Looking for test storage...
00:18:29.678  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
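[annotation] The lcov gate above uses the numeric version compare from scripts/common.sh; below is a condensed sketch consistent with the trace (split on any of ".-:", compare component-wise, missing components default to 0), not a verbatim copy of the script:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == *'<'* ]] && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == *'>'* ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
        done
        [[ $op == *'='* ]]
    }
    lt 1.15 2 && echo "pre-2.x lcov"   # true here: 1 < 2 on the first component

That first-component win is exactly why the trace takes the "ver1[v] < ver2[v]" branch and returns 0, selecting the pre-2.x LCOV_OPTS set below.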
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:29.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:29.678  		--rc genhtml_branch_coverage=1
00:18:29.678  		--rc genhtml_function_coverage=1
00:18:29.678  		--rc genhtml_legend=1
00:18:29.678  		--rc geninfo_all_blocks=1
00:18:29.678  		--rc geninfo_unexecuted_blocks=1
00:18:29.678  		
00:18:29.678  		'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:29.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:29.678  		--rc genhtml_branch_coverage=1
00:18:29.678  		--rc genhtml_function_coverage=1
00:18:29.678  		--rc genhtml_legend=1
00:18:29.678  		--rc geninfo_all_blocks=1
00:18:29.678  		--rc geninfo_unexecuted_blocks=1
00:18:29.678  		
00:18:29.678  		'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:29.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:29.678  		--rc genhtml_branch_coverage=1
00:18:29.678  		--rc genhtml_function_coverage=1
00:18:29.678  		--rc genhtml_legend=1
00:18:29.678  		--rc geninfo_all_blocks=1
00:18:29.678  		--rc geninfo_unexecuted_blocks=1
00:18:29.678  		
00:18:29.678  		'
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:29.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:29.678  		--rc genhtml_branch_coverage=1
00:18:29.678  		--rc genhtml_function_coverage=1
00:18:29.678  		--rc genhtml_legend=1
00:18:29.678  		--rc geninfo_all_blocks=1
00:18:29.678  		--rc geninfo_unexecuted_blocks=1
00:18:29.678  		
00:18:29.678  		'
00:18:29.678   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:29.678    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:29.678     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:29.938     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:18:29.938     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:29.938     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:29.938     13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:29.938      13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:29.938      13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:29.938      13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:29.938      13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:18:29.938      13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
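[annotation] The PATH echoed above carries multiple copies of the go/protoc/golangci prefixes because paths/export.sh is sourced once per nested test and prepends unconditionally. This is harmless, but a dedupe helper (purely illustrative, not part of the SPDK scripts) would keep it bounded:

    dedupe_path() {
        local IFS=: p out=
        local -A seen
        for p in $PATH; do                  # unquoted on purpose: split on IFS=:
            [[ -n ${seen[$p]:-} ]] && continue
            seen[$p]=1
            out=${out:+$out:}$p             # rejoin unique entries with colons
        done
        PATH=$out
    }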
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:29.938  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
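[annotation] The "integer expression expected" message above is a genuine runtime complaint, not log corruption: line 33 of nvmf/common.sh evaluates '[' '' -eq 1 ']' because whatever flag it tests expands empty in this environment, and test(1) refuses the numeric comparison; the guard simply falls through, so the run continues. A defensive pattern that avoids the noise, with placeholder names since the log does not show which flag or option is involved:

    # SOME_FLAG stands in for the empty variable at line 33 (hypothetical name)
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)           # --some-option is likewise illustrative
    fi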
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:29.938    13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:18:29.938   13:44:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:18:36.504  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:18:36.504  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
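Both mlx5 ports (device ID 0x1015) fall through the 0x1017/0x1019 checks, so the helper leaves NVME_CONNECT pinned to 15 I/O queues via nvme-cli's -i/--nr-io-queues. A hedged example of how that prefix is typically expanded later (address, port, and subsystem NQN taken from values that appear elsewhere in this log):

  # Hypothetical usage of the NVME_CONNECT prefix set above.
  $NVME_CONNECT -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn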
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:18:36.504   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:18:36.505  Found net devices under 0000:d9:00.0: mlx_0_0
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:18:36.505  Found net devices under 0000:d9:00.1: mlx_0_1
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:18:36.505    13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm
00:18:36.505   13:44:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm
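rdma_device_init first loads the InfiniBand/RDMA stack; the seven modprobe calls traced above amount to the loop below, and because modprobe is idempotent it is safe to repeat before every test:

  # Sketch of load_ib_rdma_modules as traced above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done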
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:18:36.505  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:36.505      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:18:36.505      altname enp217s0f0np0
00:18:36.505      altname ens818f0np0
00:18:36.505      inet 192.168.100.8/24 scope global mlx_0_0
00:18:36.505         valid_lft forever preferred_lft forever
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:18:36.505  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:36.505      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:18:36.505      altname enp217s0f1np1
00:18:36.505      altname ens818f1np1
00:18:36.505      inet 192.168.100.9/24 scope global mlx_0_1
00:18:36.505         valid_lft forever preferred_lft forever
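allocate_nic_ips resolves each RDMA interface to its IPv4 address with the awk/cut pipeline traced above; as a self-contained sketch:

  # Sketch of get_ip_address: field 4 of the one-line ip(8) output is the
  # CIDR address; cut strips the /24 prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this host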
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:36.505      13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:36.505      13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:36.505     13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:18:36.505  192.168.100.9'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:18:36.505  192.168.100.9'
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1
00:18:36.505   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:18:36.505    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:18:36.505  192.168.100.9'
00:18:36.506    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2
00:18:36.506    13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3305634
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3305634
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3305634 ']'
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:36.506  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:36.506   13:44:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:18:36.765  [2024-12-14 13:44:36.314504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:18:36.765  [2024-12-14 13:44:36.314600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:36.765  [2024-12-14 13:44:36.449669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:37.023  [2024-12-14 13:44:36.553926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:37.023  [2024-12-14 13:44:36.553979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:37.023  [2024-12-14 13:44:36.553992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:37.023  [2024-12-14 13:44:36.554005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:37.023  [2024-12-14 13:44:36.554015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:37.023  [2024-12-14 13:44:36.556393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:37.023  [2024-12-14 13:44:36.556417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:37.023  [2024-12-14 13:44:36.556478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:37.023  [2024-12-14 13:44:36.556486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
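nvmfappstart launches the target in the background and blocks until its RPC socket answers; a sketch of the step just traced (paths and flags copied from the trace; the backgrounding and poll loop are assumptions based on the "Waiting for process..." message):

  # Sketch: start nvmf_tgt with shm id 0, all tracepoint groups, 4-core mask.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten then polls /var/tmp/spdk.sock until the app accepts RPCs.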
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:18:37.588    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:18:37.588    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:18:37.588   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:18:37.846  "nvmf_tgt_1"
00:18:37.846   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:18:37.846  "nvmf_tgt_2"
00:18:37.846    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:18:37.846    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:18:38.103   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:18:38.103   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:18:38.103  true
00:18:38.103   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:18:38.103  true
00:18:38.103    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:18:38.103    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
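Condensed, the multitarget test body traced above is the sequence below: confirm one default target, add two more, verify the count reaches three, then delete both and verify it drops back to one:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]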
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:38.360  rmmod nvme_rdma
00:18:38.360  rmmod nvme_fabrics
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
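nvmfcleanup unloads the initiator modules under set +e so a busy module does not abort teardown; the trace is consistent with a retry loop along these lines (the sleep between attempts is an assumption):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e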
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3305634 ']'
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3305634
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3305634 ']'
00:18:38.360   13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3305634
00:18:38.360    13:44:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname
00:18:38.360   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:38.361    13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3305634
00:18:38.361   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:38.361   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:38.361   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3305634'
00:18:38.361  killing process with pid 3305634
00:18:38.361   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3305634
00:18:38.361   13:44:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3305634
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:39.736  
00:18:39.736  real	0m9.948s
00:18:39.736  user	0m12.690s
00:18:39.736  sys	0m5.849s
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:18:39.736  ************************************
00:18:39.736  END TEST nvmf_multitarget
00:18:39.736  ************************************
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:39.736  ************************************
00:18:39.736  START TEST nvmf_rpc
00:18:39.736  ************************************
00:18:39.736   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma
00:18:39.736  * Looking for test storage...
00:18:39.736  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:39.736     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
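The lt 1.15 2 trace above walks scripts/common.sh's cmp_versions helper component by component; a condensed sketch of the same idea (assuming purely numeric components, unlike the more general helper):

  lt() {   # sketch: true if version $1 sorts before version $2
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x, use the 1.x option names"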
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:39.736  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:39.736  		--rc genhtml_branch_coverage=1
00:18:39.736  		--rc genhtml_function_coverage=1
00:18:39.736  		--rc genhtml_legend=1
00:18:39.736  		--rc geninfo_all_blocks=1
00:18:39.736  		--rc geninfo_unexecuted_blocks=1
00:18:39.736  		
00:18:39.736  		'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:39.736  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:39.736  		--rc genhtml_branch_coverage=1
00:18:39.736  		--rc genhtml_function_coverage=1
00:18:39.736  		--rc genhtml_legend=1
00:18:39.736  		--rc geninfo_all_blocks=1
00:18:39.736  		--rc geninfo_unexecuted_blocks=1
00:18:39.736  		
00:18:39.736  		'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:39.736  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:39.736  		--rc genhtml_branch_coverage=1
00:18:39.736  		--rc genhtml_function_coverage=1
00:18:39.736  		--rc genhtml_legend=1
00:18:39.736  		--rc geninfo_all_blocks=1
00:18:39.736  		--rc geninfo_unexecuted_blocks=1
00:18:39.736  		
00:18:39.736  		'
00:18:39.736    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:39.736  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:39.736  		--rc genhtml_branch_coverage=1
00:18:39.737  		--rc genhtml_function_coverage=1
00:18:39.737  		--rc genhtml_legend=1
00:18:39.737  		--rc geninfo_all_blocks=1
00:18:39.737  		--rc geninfo_unexecuted_blocks=1
00:18:39.737  		
00:18:39.737  		'
00:18:39.737   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
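nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID above is its UUID suffix; one plausible derivation (the parameter expansion is an assumption, only the resulting values are confirmed by the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}   # assumed: strip everything up to the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")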
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:39.737     13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:39.737      13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:39.737      13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:39.737      13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:39.737      13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:18:39.737      13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:39.737  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
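The "integer expression expected" complaint above is test(1) receiving an empty string where -eq needs a number; the run tolerates it (the branch simply isn't taken), but a defensive form would default the operand first (variable name hypothetical):

  flag=""                              # unset/empty in this environment
  [ "${flag:-0}" -eq 1 ] && echo on    # defaults to 0 instead of erroring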
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:39.737    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:39.996    13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:18:39.996   13:44:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:18:46.557  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:18:46.557  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:18:46.557  Found net devices under 0000:d9:00.0: mlx_0_0
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:18:46.557  Found net devices under 0000:d9:00.1: mlx_0_1
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:18:46.557    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips
00:18:46.557   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:18:46.557    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list
00:18:46.557    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:46.557    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:46.557     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:18:46.558  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:46.558      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:18:46.558      altname enp217s0f0np0
00:18:46.558      altname ens818f0np0
00:18:46.558      inet 192.168.100.8/24 scope global mlx_0_0
00:18:46.558         valid_lft forever preferred_lft forever
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:18:46.558  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:46.558      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:18:46.558      altname enp217s0f1np1
00:18:46.558      altname ens818f1np1
00:18:46.558      inet 192.168.100.9/24 scope global mlx_0_1
00:18:46.558         valid_lft forever preferred_lft forever
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:46.558      13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:46.558      13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:46.558     13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:18:46.558  192.168.100.9'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:18:46.558  192.168.100.9'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:18:46.558  192.168.100.9'
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2
00:18:46.558    13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma
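
common.sh@484-486 then collapses the per-interface results into `RDMA_IP_LIST` and peels off the first and second target addresses with `head`/`tail`. The same selection spelled out, values from this run (building the list with `printf` is just a compact stand-in for the traced loop):

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)  # 192.168.100.9
    modprobe nvme-rdma   # host-side fabrics driver for the `nvme connect -t rdma` calls below
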
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3309362
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3309362
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3309362 ']'
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:46.558  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:46.558   13:44:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:46.558  [2024-12-14 13:44:45.800436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:18:46.558  [2024-12-14 13:44:45.800530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:46.558  [2024-12-14 13:44:45.935630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:46.558  [2024-12-14 13:44:46.033217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:46.558  [2024-12-14 13:44:46.033263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:46.558  [2024-12-14 13:44:46.033275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:46.558  [2024-12-14 13:44:46.033305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:46.558  [2024-12-14 13:44:46.033315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:46.558  [2024-12-14 13:44:46.035819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:46.558  [2024-12-14 13:44:46.035895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:46.558  [2024-12-14 13:44:46.035973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:46.559  [2024-12-14 13:44:46.035979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
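
`nvmfappstart -m 0xF` is the generic target launcher: start `nvmf_tgt` with a four-core mask, record `nvmfpid`, and have `waitforlisten` poll until the app answers on `/var/tmp/spdk.sock` (the `max_retries=100` local above is that poll bound). A simplified sketch of the pattern; probing with `spdk_get_version` is an assumption for illustration, not necessarily the exact RPC `waitforlisten` issues:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # Any successful RPC means the app is up and listening on the socket.
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
        sleep 0.5
    done

Once the reactors report in on all four cores (the NOTICE lines above), the RPC socket is live and the test proper can start.
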
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:18:47.125    "tick_rate": 2500000000,
00:18:47.125    "poll_groups": [
00:18:47.125      {
00:18:47.125        "name": "nvmf_tgt_poll_group_000",
00:18:47.125        "admin_qpairs": 0,
00:18:47.125        "io_qpairs": 0,
00:18:47.125        "current_admin_qpairs": 0,
00:18:47.125        "current_io_qpairs": 0,
00:18:47.125        "pending_bdev_io": 0,
00:18:47.125        "completed_nvme_io": 0,
00:18:47.125        "transports": []
00:18:47.125      },
00:18:47.125      {
00:18:47.125        "name": "nvmf_tgt_poll_group_001",
00:18:47.125        "admin_qpairs": 0,
00:18:47.125        "io_qpairs": 0,
00:18:47.125        "current_admin_qpairs": 0,
00:18:47.125        "current_io_qpairs": 0,
00:18:47.125        "pending_bdev_io": 0,
00:18:47.125        "completed_nvme_io": 0,
00:18:47.125        "transports": []
00:18:47.125      },
00:18:47.125      {
00:18:47.125        "name": "nvmf_tgt_poll_group_002",
00:18:47.125        "admin_qpairs": 0,
00:18:47.125        "io_qpairs": 0,
00:18:47.125        "current_admin_qpairs": 0,
00:18:47.125        "current_io_qpairs": 0,
00:18:47.125        "pending_bdev_io": 0,
00:18:47.125        "completed_nvme_io": 0,
00:18:47.125        "transports": []
00:18:47.125      },
00:18:47.125      {
00:18:47.125        "name": "nvmf_tgt_poll_group_003",
00:18:47.125        "admin_qpairs": 0,
00:18:47.125        "io_qpairs": 0,
00:18:47.125        "current_admin_qpairs": 0,
00:18:47.125        "current_io_qpairs": 0,
00:18:47.125        "pending_bdev_io": 0,
00:18:47.125        "completed_nvme_io": 0,
00:18:47.125        "transports": []
00:18:47.125      }
00:18:47.125    ]
00:18:47.125  }'
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:18:47.125    13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
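
`jcount` at target/rpc.sh@14-15 is just "how many lines does a jq filter emit": on the freshly started target it finds four poll groups, one per core in the 0xF mask, and `.poll_groups[0].transports[0]` is still `null` because no transport exists yet. A sketch of the helper as traced (feeding `$stats` through a here-string is an assumption about plumbing the trace does not show):

    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }
    jcount '.poll_groups[].name'                      # 4 poll groups
    jq '.poll_groups[0].transports[0]' <<< "$stats"   # null until nvmf_create_transport runs
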
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.125   13:44:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.125  [2024-12-14 13:44:46.807019] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7ff4f230f940) succeed.
00:18:47.125  [2024-12-14 13:44:46.816965] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7ff4f19bd940) succeed.
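
Creating the transport is a single RPC, and the two NOTICE lines confirm both mlx5 IB devices were registered. The equivalent standalone call, arguments copied from the trace (default RPC socket assumed):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
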
00:18:47.384   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.384    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:18:47.384    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.384    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.642    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.642   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:18:47.642    "tick_rate": 2500000000,
00:18:47.642    "poll_groups": [
00:18:47.642      {
00:18:47.642        "name": "nvmf_tgt_poll_group_000",
00:18:47.642        "admin_qpairs": 0,
00:18:47.642        "io_qpairs": 0,
00:18:47.642        "current_admin_qpairs": 0,
00:18:47.642        "current_io_qpairs": 0,
00:18:47.642        "pending_bdev_io": 0,
00:18:47.642        "completed_nvme_io": 0,
00:18:47.642        "transports": [
00:18:47.642          {
00:18:47.642            "trtype": "RDMA",
00:18:47.642            "pending_data_buffer": 0,
00:18:47.642            "devices": [
00:18:47.642              {
00:18:47.642                "name": "mlx5_0",
00:18:47.642                "polls": 30127,
00:18:47.642                "idle_polls": 30127,
00:18:47.642                "completions": 0,
00:18:47.642                "requests": 0,
00:18:47.642                "request_latency": 0,
00:18:47.642                "pending_free_request": 0,
00:18:47.642                "pending_rdma_read": 0,
00:18:47.642                "pending_rdma_write": 0,
00:18:47.642                "pending_rdma_send": 0,
00:18:47.642                "total_send_wrs": 0,
00:18:47.642                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              },
00:18:47.643              {
00:18:47.643                "name": "mlx5_1",
00:18:47.643                "polls": 30127,
00:18:47.643                "idle_polls": 30127,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              }
00:18:47.643            ]
00:18:47.643          }
00:18:47.643        ]
00:18:47.643      },
00:18:47.643      {
00:18:47.643        "name": "nvmf_tgt_poll_group_001",
00:18:47.643        "admin_qpairs": 0,
00:18:47.643        "io_qpairs": 0,
00:18:47.643        "current_admin_qpairs": 0,
00:18:47.643        "current_io_qpairs": 0,
00:18:47.643        "pending_bdev_io": 0,
00:18:47.643        "completed_nvme_io": 0,
00:18:47.643        "transports": [
00:18:47.643          {
00:18:47.643            "trtype": "RDMA",
00:18:47.643            "pending_data_buffer": 0,
00:18:47.643            "devices": [
00:18:47.643              {
00:18:47.643                "name": "mlx5_0",
00:18:47.643                "polls": 18587,
00:18:47.643                "idle_polls": 18587,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              },
00:18:47.643              {
00:18:47.643                "name": "mlx5_1",
00:18:47.643                "polls": 18587,
00:18:47.643                "idle_polls": 18587,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              }
00:18:47.643            ]
00:18:47.643          }
00:18:47.643        ]
00:18:47.643      },
00:18:47.643      {
00:18:47.643        "name": "nvmf_tgt_poll_group_002",
00:18:47.643        "admin_qpairs": 0,
00:18:47.643        "io_qpairs": 0,
00:18:47.643        "current_admin_qpairs": 0,
00:18:47.643        "current_io_qpairs": 0,
00:18:47.643        "pending_bdev_io": 0,
00:18:47.643        "completed_nvme_io": 0,
00:18:47.643        "transports": [
00:18:47.643          {
00:18:47.643            "trtype": "RDMA",
00:18:47.643            "pending_data_buffer": 0,
00:18:47.643            "devices": [
00:18:47.643              {
00:18:47.643                "name": "mlx5_0",
00:18:47.643                "polls": 9785,
00:18:47.643                "idle_polls": 9785,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              },
00:18:47.643              {
00:18:47.643                "name": "mlx5_1",
00:18:47.643                "polls": 9785,
00:18:47.643                "idle_polls": 9785,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              }
00:18:47.643            ]
00:18:47.643          }
00:18:47.643        ]
00:18:47.643      },
00:18:47.643      {
00:18:47.643        "name": "nvmf_tgt_poll_group_003",
00:18:47.643        "admin_qpairs": 0,
00:18:47.643        "io_qpairs": 0,
00:18:47.643        "current_admin_qpairs": 0,
00:18:47.643        "current_io_qpairs": 0,
00:18:47.643        "pending_bdev_io": 0,
00:18:47.643        "completed_nvme_io": 0,
00:18:47.643        "transports": [
00:18:47.643          {
00:18:47.643            "trtype": "RDMA",
00:18:47.643            "pending_data_buffer": 0,
00:18:47.643            "devices": [
00:18:47.643              {
00:18:47.643                "name": "mlx5_0",
00:18:47.643                "polls": 757,
00:18:47.643                "idle_polls": 757,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              },
00:18:47.643              {
00:18:47.643                "name": "mlx5_1",
00:18:47.643                "polls": 757,
00:18:47.643                "idle_polls": 757,
00:18:47.643                "completions": 0,
00:18:47.643                "requests": 0,
00:18:47.643                "request_latency": 0,
00:18:47.643                "pending_free_request": 0,
00:18:47.643                "pending_rdma_read": 0,
00:18:47.643                "pending_rdma_write": 0,
00:18:47.643                "pending_rdma_send": 0,
00:18:47.643                "total_send_wrs": 0,
00:18:47.643                "send_doorbell_updates": 0,
00:18:47.643                "total_recv_wrs": 4096,
00:18:47.643                "recv_doorbell_updates": 1
00:18:47.643              }
00:18:47.643            ]
00:18:47.643          }
00:18:47.643        ]
00:18:47.643      }
00:18:47.643    ]
00:18:47.643  }'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 ))
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype'
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]]
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name'
00:18:47.643    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 ))
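
`jsum` (target/rpc.sh@19-20) totals a numeric field across poll groups with awk; both sums are still zero since no controller has connected, while `jcount` on the device names now returns 2 (`mlx5_0` and `mlx5_1` registered in every poll group's RDMA transport). A sketch under the same here-string assumption as before:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 0: nothing connected yet
    jsum '.poll_groups[].io_qpairs'      # 0
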
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.643   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902  Malloc1
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902  [2024-12-14 13:44:47.435298] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
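
The RPCs above assemble the target side: a 64 MiB malloc bdev with 512 B blocks, subsystem cnode1 (created with `-a`, then explicitly switched to allow-any-host off with `-d`), the bdev attached as a namespace, and an RDMA listener on the first target IP. The same sequence as standalone calls, arguments copied from the trace:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
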
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:47.902    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:47.902    13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420
00:18:47.902  [2024-12-14 13:44:47.481893] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e'
00:18:47.902  Failed to write to /dev/nvme-fabrics: Input/output error
00:18:47.902  could not add new controller: failed to write to nvme-fabrics device
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
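
With allow-any-host disabled and no hosts whitelisted, the target's `nvmf_qpair_access_allowed` rejects the fabrics connect, the kernel surfaces that as an I/O error on `/dev/nvme-fabrics`, and the `NOT` wrapper counts the non-zero exit as the expected outcome. The failing call, reduced to its essentials (host NQN from this run):

    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # -> Failed to write to /dev/nvme-fabrics: Input/output error (host not allowed by subsystem)
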
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.902   13:44:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:48.837   13:44:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:18:48.837   13:44:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:18:48.837   13:44:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:48.837   13:44:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:18:48.837   13:44:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:18:51.367   13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:51.367    13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:51.367    13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:51.367   13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:51.367   13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:51.367   13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
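
After `nvmf_subsystem_add_host` whitelists the host NQN, the identical connect succeeds, and `waitforserial` polls `lsblk` until a block device carrying the subsystem serial appears. A sketch following the traced logic at autotest_common.sh@1202-1212, simplified to the one-device case used here:

    waitforserial() {
        local serial=$1 i=0 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices >= 1)) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME   # returns 0 once the namespace shows up
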
00:18:51.367   13:44:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:51.933  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
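
`waitforserial_disconnect` is the mirror image: after `nvme disconnect`, poll until the serial no longer appears in `lsblk`. A sketch under the same simplifications:

    waitforserial_disconnect() {
        local serial=$1 i=0
        while ((i++ <= 15)); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
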
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:51.933    13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:51.933    13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:51.933  [2024-12-14 13:44:51.593935] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e'
00:18:51.933  Failed to write to /dev/nvme-fabrics: Input/output error
00:18:51.933  could not add new controller: failed to write to nvme-fabrics device
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.933   13:44:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:53.308   13:44:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:18:53.308   13:44:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:18:53.308   13:44:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:53.308   13:44:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:18:53.308   13:44:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:18:55.241   13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:55.241    13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:55.241    13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:55.241   13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:55.241   13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:55.241   13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:18:55.241   13:44:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:56.174  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:56.174   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:18:56.174   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.175    13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:56.175  [2024-12-14 13:44:55.670089] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.175   13:44:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:57.109   13:44:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:18:57.109   13:44:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:18:57.109   13:44:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:57.109   13:44:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:18:57.109   13:44:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:18:59.010   13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:59.010    13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:59.010    13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:59.010   13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:59.010   13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:59.010   13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:18:59.010   13:44:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:59.944  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:59.944   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:18:59.944   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:18:59.944   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:18:59.944   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:59.944   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
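
The script repeats this lifecycle five times (`seq 1 5` at target/rpc.sh@81), each pass recreating cnode1, listening on the first target IP, attaching Malloc1 as namespace 5, opening it to any host, connecting, waiting for the serial, disconnecting, waiting for it to vanish, and tearing the subsystem down. One iteration, compressed from the traced rpc.sh@81-94 body:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
             --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
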
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203  [2024-12-14 13:44:59.731393] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.203   13:44:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:19:01.137   13:45:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:19:01.137   13:45:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:19:01.137   13:45:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:19:01.137   13:45:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:19:01.137   13:45:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:19:03.038   13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:19:03.038    13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:19:03.038    13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:19:03.038   13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:19:03.038   13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:19:03.038   13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:19:03.038   13:45:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:03.973  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:03.973   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:03.973   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:19:03.973   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.231  [2024-12-14 13:45:03.777887] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.231   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.232   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:04.232   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.232   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.232   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.232   13:45:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:19:05.166   13:45:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:19:05.166   13:45:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:19:05.166   13:45:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:19:05.166   13:45:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:19:05.166   13:45:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:19:07.066   13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:19:07.066    13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:19:07.066    13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:19:07.066   13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:19:07.066   13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:19:07.066   13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:19:07.066   13:45:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:08.440  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440  [2024-12-14 13:45:07.837197] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.440   13:45:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:19:09.374   13:45:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:19:09.374   13:45:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:19:09.375   13:45:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:19:09.375   13:45:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:19:09.375   13:45:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:19:11.275   13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:19:11.275    13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:19:11.275    13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:19:11.275   13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:19:11.275   13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:19:11.275   13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
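The waitforserial helper traced above polls until a block device with the expected SERIAL shows up. Its body is not printed verbatim in the log, so the following is a minimal sketch reconstructed from the xtrace line numbers (autotest_common.sh@1202-1212); the real function may differ in detail:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2   # optional expected device count
        sleep 2
        while ((i++ <= 15)); do
            # Count block devices whose SERIAL column matches the target serial.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

waitforserial_disconnect (autotest_common.sh@1223-1235) is the inverse: it loops until grep -q -w no longer finds the serial in the lsblk output.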
00:19:11.275   13:45:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:12.208  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208  [2024-12-14 13:45:11.881303] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.208   13:45:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:19:13.142   13:45:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:19:13.142   13:45:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:19:13.142   13:45:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:19:13.142   13:45:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:19:13.142   13:45:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:19:15.670   13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:19:15.670    13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:19:15.670    13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:19:15.670   13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:19:15.670   13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:19:15.670   13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:19:15.670   13:45:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:16.236  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
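Each pass of the loop traced above (target/rpc.sh@81-94) runs the full subsystem lifecycle against the RDMA listener at 192.168.100.8:4420. Condensed from the trace, one iteration is:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        waitforserial SPDKISFASTANDAWESOME        # block device appears on the host
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

"${NVME_HOST[@]}" expands to the --hostnqn/--hostid pair seen in the trace; per the timestamps, each connect/wait/disconnect round trip costs roughly 4 seconds of wall-clock time.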
00:19:16.236    13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.236  [2024-12-14 13:45:15.950724] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.236   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495  [2024-12-14 13:45:16.006946] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495  [2024-12-14 13:45:16.059095] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495  [2024-12-14 13:45:16.111308] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.495   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.496  [2024-12-14 13:45:16.163539] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.496   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
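The second loop (target/rpc.sh@99-107) repeats the lifecycle five times without a host connect, exercising namespace add/remove and subsystem teardown back-to-back. Note that nvmf_subsystem_add_ns is called here without -n, so the target auto-assigns the lowest free NSID (1), which is what the matching nvmf_subsystem_remove_ns call removes. One iteration, condensed from the trace:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned: 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done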
00:19:16.496    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:19:16.496    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.496    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:16.754    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.754   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:19:16.754  "tick_rate": 2500000000,
00:19:16.754  "poll_groups": [
00:19:16.754  {
00:19:16.754  "name": "nvmf_tgt_poll_group_000",
00:19:16.754  "admin_qpairs": 2,
00:19:16.754  "io_qpairs": 27,
00:19:16.754  "current_admin_qpairs": 0,
00:19:16.754  "current_io_qpairs": 0,
00:19:16.754  "pending_bdev_io": 0,
00:19:16.754  "completed_nvme_io": 122,
00:19:16.754  "transports": [
00:19:16.754  {
00:19:16.754  "trtype": "RDMA",
00:19:16.754  "pending_data_buffer": 0,
00:19:16.754  "devices": [
00:19:16.754  {
00:19:16.754  "name": "mlx5_0",
00:19:16.754  "polls": 3247254,
00:19:16.754  "idle_polls": 3246935,
00:19:16.754  "completions": 357,
00:19:16.754  "requests": 178,
00:19:16.754  "request_latency": 44767106,
00:19:16.754  "pending_free_request": 0,
00:19:16.754  "pending_rdma_read": 0,
00:19:16.754  "pending_rdma_write": 0,
00:19:16.754  "pending_rdma_send": 0,
00:19:16.754  "total_send_wrs": 300,
00:19:16.754  "send_doorbell_updates": 158,
00:19:16.754  "total_recv_wrs": 4274,
00:19:16.754  "recv_doorbell_updates": 158
00:19:16.754  },
00:19:16.754  {
00:19:16.754  "name": "mlx5_1",
00:19:16.754  "polls": 3247254,
00:19:16.754  "idle_polls": 3247254,
00:19:16.754  "completions": 0,
00:19:16.754  "requests": 0,
00:19:16.754  "request_latency": 0,
00:19:16.754  "pending_free_request": 0,
00:19:16.754  "pending_rdma_read": 0,
00:19:16.754  "pending_rdma_write": 0,
00:19:16.754  "pending_rdma_send": 0,
00:19:16.754  "total_send_wrs": 0,
00:19:16.754  "send_doorbell_updates": 0,
00:19:16.754  "total_recv_wrs": 4096,
00:19:16.754  "recv_doorbell_updates": 1
00:19:16.754  }
00:19:16.754  ]
00:19:16.754  }
00:19:16.754  ]
00:19:16.754  },
00:19:16.754  {
00:19:16.754  "name": "nvmf_tgt_poll_group_001",
00:19:16.754  "admin_qpairs": 2,
00:19:16.754  "io_qpairs": 26,
00:19:16.754  "current_admin_qpairs": 0,
00:19:16.754  "current_io_qpairs": 0,
00:19:16.754  "pending_bdev_io": 0,
00:19:16.754  "completed_nvme_io": 126,
00:19:16.754  "transports": [
00:19:16.754  {
00:19:16.754  "trtype": "RDMA",
00:19:16.754  "pending_data_buffer": 0,
00:19:16.754  "devices": [
00:19:16.754  {
00:19:16.754  "name": "mlx5_0",
00:19:16.754  "polls": 3112409,
00:19:16.754  "idle_polls": 3112092,
00:19:16.754  "completions": 358,
00:19:16.754  "requests": 179,
00:19:16.754  "request_latency": 47279306,
00:19:16.754  "pending_free_request": 0,
00:19:16.754  "pending_rdma_read": 0,
00:19:16.754  "pending_rdma_write": 0,
00:19:16.754  "pending_rdma_send": 0,
00:19:16.754  "total_send_wrs": 304,
00:19:16.754  "send_doorbell_updates": 154,
00:19:16.754  "total_recv_wrs": 4275,
00:19:16.754  "recv_doorbell_updates": 155
00:19:16.754  },
00:19:16.754  {
00:19:16.754  "name": "mlx5_1",
00:19:16.754  "polls": 3112409,
00:19:16.754  "idle_polls": 3112409,
00:19:16.754  "completions": 0,
00:19:16.754  "requests": 0,
00:19:16.754  "request_latency": 0,
00:19:16.754  "pending_free_request": 0,
00:19:16.754  "pending_rdma_read": 0,
00:19:16.754  "pending_rdma_write": 0,
00:19:16.754  "pending_rdma_send": 0,
00:19:16.754  "total_send_wrs": 0,
00:19:16.754  "send_doorbell_updates": 0,
00:19:16.754  "total_recv_wrs": 4096,
00:19:16.754  "recv_doorbell_updates": 1
00:19:16.754  }
00:19:16.754  ]
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  },
00:19:16.755  {
00:19:16.755  "name": "nvmf_tgt_poll_group_002",
00:19:16.755  "admin_qpairs": 1,
00:19:16.755  "io_qpairs": 26,
00:19:16.755  "current_admin_qpairs": 0,
00:19:16.755  "current_io_qpairs": 0,
00:19:16.755  "pending_bdev_io": 0,
00:19:16.755  "completed_nvme_io": 125,
00:19:16.755  "transports": [
00:19:16.755  {
00:19:16.755  "trtype": "RDMA",
00:19:16.755  "pending_data_buffer": 0,
00:19:16.755  "devices": [
00:19:16.755  {
00:19:16.755  "name": "mlx5_0",
00:19:16.755  "polls": 3271215,
00:19:16.755  "idle_polls": 3270949,
00:19:16.755  "completions": 307,
00:19:16.755  "requests": 153,
00:19:16.755  "request_latency": 43506620,
00:19:16.755  "pending_free_request": 0,
00:19:16.755  "pending_rdma_read": 0,
00:19:16.755  "pending_rdma_write": 0,
00:19:16.755  "pending_rdma_send": 0,
00:19:16.755  "total_send_wrs": 266,
00:19:16.755  "send_doorbell_updates": 129,
00:19:16.755  "total_recv_wrs": 4249,
00:19:16.755  "recv_doorbell_updates": 129
00:19:16.755  },
00:19:16.755  {
00:19:16.755  "name": "mlx5_1",
00:19:16.755  "polls": 3271215,
00:19:16.755  "idle_polls": 3271215,
00:19:16.755  "completions": 0,
00:19:16.755  "requests": 0,
00:19:16.755  "request_latency": 0,
00:19:16.755  "pending_free_request": 0,
00:19:16.755  "pending_rdma_read": 0,
00:19:16.755  "pending_rdma_write": 0,
00:19:16.755  "pending_rdma_send": 0,
00:19:16.755  "total_send_wrs": 0,
00:19:16.755  "send_doorbell_updates": 0,
00:19:16.755  "total_recv_wrs": 4096,
00:19:16.755  "recv_doorbell_updates": 1
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  },
00:19:16.755  {
00:19:16.755  "name": "nvmf_tgt_poll_group_003",
00:19:16.755  "admin_qpairs": 2,
00:19:16.755  "io_qpairs": 26,
00:19:16.755  "current_admin_qpairs": 0,
00:19:16.755  "current_io_qpairs": 0,
00:19:16.755  "pending_bdev_io": 0,
00:19:16.755  "completed_nvme_io": 82,
00:19:16.755  "transports": [
00:19:16.755  {
00:19:16.755  "trtype": "RDMA",
00:19:16.755  "pending_data_buffer": 0,
00:19:16.755  "devices": [
00:19:16.755  {
00:19:16.755  "name": "mlx5_0",
00:19:16.755  "polls": 2437133,
00:19:16.755  "idle_polls": 2436889,
00:19:16.755  "completions": 270,
00:19:16.755  "requests": 135,
00:19:16.755  "request_latency": 32662308,
00:19:16.755  "pending_free_request": 0,
00:19:16.755  "pending_rdma_read": 0,
00:19:16.755  "pending_rdma_write": 0,
00:19:16.755  "pending_rdma_send": 0,
00:19:16.755  "total_send_wrs": 215,
00:19:16.755  "send_doorbell_updates": 120,
00:19:16.755  "total_recv_wrs": 4231,
00:19:16.755  "recv_doorbell_updates": 121
00:19:16.755  },
00:19:16.755  {
00:19:16.755  "name": "mlx5_1",
00:19:16.755  "polls": 2437133,
00:19:16.755  "idle_polls": 2437133,
00:19:16.755  "completions": 0,
00:19:16.755  "requests": 0,
00:19:16.755  "request_latency": 0,
00:19:16.755  "pending_free_request": 0,
00:19:16.755  "pending_rdma_read": 0,
00:19:16.755  "pending_rdma_write": 0,
00:19:16.755  "pending_rdma_send": 0,
00:19:16.755  "total_send_wrs": 0,
00:19:16.755  "send_doorbell_updates": 0,
00:19:16.755  "total_recv_wrs": 4096,
00:19:16.755  "recv_doorbell_updates": 1
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  }
00:19:16.755  ]
00:19:16.755  }'
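The nvmf_get_stats dump above reports, per poll group, the qpair counters and per-RDMA-device work-request counters, with latencies in ticks of the reported tick_rate (2.5 GHz here). Only mlx5_0 carried traffic on each poll group; mlx5_1's 4096 total_recv_wrs with zero completions look like nothing more than its initially posted receive buffers. The captured JSON in $stats can be queried directly, e.g.:

    # One line per poll group and device: group, device, completions, request_latency (ticks)
    jq -r '.poll_groups[] as $pg
           | $pg.transports[].devices[]
           | "\($pg.name) \(.name) \(.completions) \(.request_latency)"' <<< "$stats"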
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 ))
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 ))
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:19:16.755    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 168215340 > 0 ))
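jsum, expanded in the trace at target/rpc.sh@19-20, sums a numeric jq filter over the captured stats; the checks above assert that qpairs were created (7 admin, 105 io), that RDMA completions occurred (1292), and that cumulative request latency is non-zero (168215340 ticks). From the trace it is essentially the following (how the JSON reaches jq is an assumption; the trace only shows the filter and the awk reducer):

    jsum() {
        local filter=$1
        # Emit one number per poll group / transport / device, then total them.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))   # 105 in this run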
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:16.755  rmmod nvme_rdma
00:19:16.755  rmmod nvme_fabrics
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3309362 ']'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3309362
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3309362 ']'
00:19:16.755   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3309362
00:19:17.013    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:17.013    13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3309362
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3309362'
00:19:17.013  killing process with pid 3309362
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3309362
00:19:17.013   13:45:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3309362
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:18.913  
00:19:18.913  real	0m39.139s
00:19:18.913  user	2m8.938s
00:19:18.913  sys	0m6.764s
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:18.913  ************************************
00:19:18.913  END TEST nvmf_rpc
00:19:18.913  ************************************
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:18.913  ************************************
00:19:18.913  START TEST nvmf_invalid
00:19:18.913  ************************************
00:19:18.913   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:19:18.913  * Looking for test storage...
00:19:18.913  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:19:18.913    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:18.913     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
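The lt helper above decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing components numerically (scripts/common.sh@333-368). A condensed sketch that reproduces the traced path for "lt 1.15 2" (the real helper also dispatches on other operators via $op):

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing or non-numeric components are treated as 0 (the "decimal" step).
            d1=${ver1[v]:-0}; [[ $d1 =~ ^[0-9]+$ ]] || d1=0
            d2=${ver2[v]:-0}; [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            ((d1 > d2)) && return 1   # left side newer: "<" fails
            ((d1 < d2)) && return 0   # left side older: "<" holds (1 < 2 here)
        done
        return 1                      # equal: "<" does not hold
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here ver1=(1 15) and ver2=(2), so the first component comparison 1 < 2 succeeds, and the lcov 1.x style --rc branch/function coverage flags are exported below.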
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:19.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:19.172  		--rc genhtml_branch_coverage=1
00:19:19.172  		--rc genhtml_function_coverage=1
00:19:19.172  		--rc genhtml_legend=1
00:19:19.172  		--rc geninfo_all_blocks=1
00:19:19.172  		--rc geninfo_unexecuted_blocks=1
00:19:19.172  		
00:19:19.172  		'
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:19.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:19.172  		--rc genhtml_branch_coverage=1
00:19:19.172  		--rc genhtml_function_coverage=1
00:19:19.172  		--rc genhtml_legend=1
00:19:19.172  		--rc geninfo_all_blocks=1
00:19:19.172  		--rc geninfo_unexecuted_blocks=1
00:19:19.172  		
00:19:19.172  		'
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:19.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:19.172  		--rc genhtml_branch_coverage=1
00:19:19.172  		--rc genhtml_function_coverage=1
00:19:19.172  		--rc genhtml_legend=1
00:19:19.172  		--rc geninfo_all_blocks=1
00:19:19.172  		--rc geninfo_unexecuted_blocks=1
00:19:19.172  		
00:19:19.172  		'
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:19.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:19.172  		--rc genhtml_branch_coverage=1
00:19:19.172  		--rc genhtml_function_coverage=1
00:19:19.172  		--rc genhtml_legend=1
00:19:19.172  		--rc geninfo_all_blocks=1
00:19:19.172  		--rc geninfo_unexecuted_blocks=1
00:19:19.172  		
00:19:19.172  		'
00:19:19.172   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:19.172    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:19.172     13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:19.172      13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.172      13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.172      13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.172      13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH
00:19:19.172      13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:19.173  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0
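The "integer expression expected" line above is a logged shell diagnostic, not a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty variable, and test(1)'s -eq requires integers on both sides. The test exits non-zero either way, so build_nvmf_app_args simply skips that branch and the run continues. A defensive pattern that would avoid the noise (the variable name below is a stand-in; the actual one at line 33 is not visible in the xtrace):

    # Default the possibly-empty flag to 0 before the numeric comparison.
    if [ "${flag:-0}" -eq 1 ]; then
        :   # branch taken only when the flag is explicitly set to 1
    fi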
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:19.173    13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable
00:19:19.173   13:45:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=()
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:19:25.736  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:19:25.736  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:25.736   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:19:25.737  Found net devices under 0000:d9:00.0: mlx_0_0
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:19:25.737  Found net devices under 0000:d9:00.1: mlx_0_1
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
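
The loop above resolves each detected Mellanox PCI function to its kernel net device by globbing sysfs; that glob plus a path strip is the whole lookup. A minimal standalone sketch of the same step, using the first PCI address from the log:

    #!/usr/bin/env bash
    # Resolve a PCI function to its net interface names, mirroring
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) traced above.
    pci=0000:d9:00.0                          # address taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs path, keep iface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
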
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm
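
rdma_device_init begins by loading the InfiniBand/RDMA core stack; the seven modprobe calls traced above are the whole of load_ib_rdma_modules on Linux. Collected into one loop:

    #!/usr/bin/env bash
    # Load the IB/RDMA kernel modules required by the RDMA transport tests
    # (module list copied from the load_ib_rdma_modules trace above).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
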
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:25.737     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:25.737     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:19:25.737  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:25.737      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:19:25.737      altname enp217s0f0np0
00:19:25.737      altname ens818f0np0
00:19:25.737      inet 192.168.100.8/24 scope global mlx_0_0
00:19:25.737         valid_lft forever preferred_lft forever
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:19:25.737  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:25.737      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:19:25.737      altname enp217s0f1np1
00:19:25.737      altname ens818f1np1
00:19:25.737      inet 192.168.100.9/24 scope global mlx_0_1
00:19:25.737         valid_lft forever preferred_lft forever
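
get_ip_address, traced twice above, is a single pipeline: field 4 of `ip -o -4 addr show` is ADDR/PREFIX, awk selects it, and cut drops the prefix length. Put back together:

    #!/usr/bin/env bash
    # First IPv4 address of an interface, as in nvmf/common.sh@116-117.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test node
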
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:25.737   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:19:25.737    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:19:25.737     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list
00:19:25.737     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:25.737     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:25.737      13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:25.737      13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:25.996     13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:19:25.996  192.168.100.9'
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:19:25.996  192.168.100.9'
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:19:25.996  192.168.100.9'
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2
00:19:25.996    13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
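
With both NICs addressed, the harness flattens the per-interface addresses into the newline-separated RDMA_IP_LIST and peels off the first two entries with head and tail, exactly as traced at common.sh@485-486:

    #!/usr/bin/env bash
    # Derive the first and second target IPs from a newline-separated list.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
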
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3318844
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3318844
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3318844 ']'
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:25.996  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:25.996   13:45:25 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:19:25.996  [2024-12-14 13:45:25.657100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:25.996  [2024-12-14 13:45:25.657191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:26.255  [2024-12-14 13:45:25.788429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:26.255  [2024-12-14 13:45:25.891944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:26.255  [2024-12-14 13:45:25.891996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:26.255  [2024-12-14 13:45:25.892009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:26.255  [2024-12-14 13:45:25.892023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:26.255  [2024-12-14 13:45:25.892033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:26.255  [2024-12-14 13:45:25.897965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:19:26.255  [2024-12-14 13:45:25.897986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:19:26.255  [2024-12-14 13:45:25.898072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:26.255  [2024-12-14 13:45:25.898083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
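
nvmfappstart launched nvmf_tgt in the background (pid 3318844 above) and then sat in waitforlisten until the app's JSON-RPC socket answered; only the entry and exit of that wait appear in the trace. The polling itself is not shown, so the following is only a hedged sketch of the idea, using rpc.py's spdk_get_version call as the liveness probe:

    #!/usr/bin/env bash
    # Hedged sketch of a waitforlisten-style loop: poll the SPDK JSON-RPC
    # socket until the target answers or max_retries attempts run out.
    # The real helper lives in autotest_common.sh; details here are assumed.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
            exit 0                        # target is up and listening
        fi
        sleep 0.1
    done
    echo "nvmf_tgt never started listening on $rpc_addr" >&2
    exit 1
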
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:26.821   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:19:26.821    13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30464
00:19:27.078  [2024-12-14 13:45:26.679557] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:19:27.078   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:19:27.078  {
00:19:27.078    "nqn": "nqn.2016-06.io.spdk:cnode30464",
00:19:27.078    "tgt_name": "foobar",
00:19:27.078    "method": "nvmf_create_subsystem",
00:19:27.078    "req_id": 1
00:19:27.078  }
00:19:27.078  Got JSON-RPC error response
00:19:27.078  response:
00:19:27.078  {
00:19:27.078    "code": -32603,
00:19:27.078    "message": "Unable to find target foobar"
00:19:27.078  }'
00:19:27.078   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:19:27.078  {
00:19:27.078    "nqn": "nqn.2016-06.io.spdk:cnode30464",
00:19:27.078    "tgt_name": "foobar",
00:19:27.078    "method": "nvmf_create_subsystem",
00:19:27.078    "req_id": 1
00:19:27.078  }
00:19:27.078  Got JSON-RPC error response
00:19:27.078  response:
00:19:27.078  {
00:19:27.078    "code": -32603,
00:19:27.078    "message": "Unable to find target foobar"
00:19:27.078  } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
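
That is the first negative case in full: ask for a subsystem under a target name that does not exist and require the JSON-RPC error to mention the missing target. Stripped of the xtrace noise, the test reduces to a capture plus a glob match:

    #!/usr/bin/env bash
    # target/invalid.sh@40-41 in essence: creating a subsystem on an unknown
    # target must fail with "Unable to find target" in the RPC error body.
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30464 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || { echo "unexpected: $out" >&2; exit 1; }
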
00:19:27.078     13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:19:27.078    13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2290
00:19:27.335  [2024-12-14 13:45:26.880291] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2290: invalid serial number 'SPDKISFASTANDAWESOME'
00:19:27.335   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:19:27.335  {
00:19:27.335    "nqn": "nqn.2016-06.io.spdk:cnode2290",
00:19:27.335    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:19:27.335    "method": "nvmf_create_subsystem",
00:19:27.335    "req_id": 1
00:19:27.335  }
00:19:27.335  Got JSON-RPC error response
00:19:27.335  response:
00:19:27.335  {
00:19:27.335    "code": -32602,
00:19:27.335    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:19:27.335  }'
00:19:27.335   13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:19:27.335  {
00:19:27.335    "nqn": "nqn.2016-06.io.spdk:cnode2290",
00:19:27.335    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:19:27.335    "method": "nvmf_create_subsystem",
00:19:27.335    "req_id": 1
00:19:27.335  }
00:19:27.335  Got JSON-RPC error response
00:19:27.335  response:
00:19:27.335  {
00:19:27.335    "code": -32602,
00:19:27.335    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:19:27.335  } == *\I\n\v\a\l\i\d\ \S\N* ]]
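
The serial-number case is the same pattern with a non-printable byte: echo -e '\x1f' produces the unit-separator control character, which is appended to an otherwise valid SN and must come back as "Invalid SN" (the target echoes the byte as \u001f in the JSON error). Compactly:

    #!/usr/bin/env bash
    # target/invalid.sh@45-46 in essence: a 0x1f control byte in the serial
    # number must be rejected with an "Invalid SN" JSON-RPC error.
    sn=$'SPDKISFASTANDAWESOME\x1f'   # same byte the trace builds with echo -e '\x1f'
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode2290 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || { echo "unexpected: $out" >&2; exit 1; }
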
00:19:27.335     13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:19:27.335    13:45:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3812
00:19:27.594  [2024-12-14 13:45:27.084980] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3812: invalid model number 'SPDK_Controller'
00:19:27.594   13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:19:27.594  {
00:19:27.594    "nqn": "nqn.2016-06.io.spdk:cnode3812",
00:19:27.594    "model_number": "SPDK_Controller\u001f",
00:19:27.594    "method": "nvmf_create_subsystem",
00:19:27.594    "req_id": 1
00:19:27.594  }
00:19:27.594  Got JSON-RPC error response
00:19:27.594  response:
00:19:27.594  {
00:19:27.594    "code": -32602,
00:19:27.594    "message": "Invalid MN SPDK_Controller\u001f"
00:19:27.594  }'
00:19:27.594   13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:19:27.594  {
00:19:27.594    "nqn": "nqn.2016-06.io.spdk:cnode3812",
00:19:27.594    "model_number": "SPDK_Controller\u001f",
00:19:27.594    "method": "nvmf_create_subsystem",
00:19:27.594    "req_id": 1
00:19:27.594  }
00:19:27.594  Got JSON-RPC error response
00:19:27.594  response:
00:19:27.594  {
00:19:27.594    "code": -32602,
00:19:27.594    "message": "Invalid MN SPDK_Controller\u001f"
00:19:27.594  } == *\I\n\v\a\l\i\d\ \M\N* ]]
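
The model-number case (@50-51) is symmetric, down to the \u001f echoed back in "Invalid MN". The long trace that follows is gen_random_s assembling a 21-character serial number one character at a time: chars holds the printable-ASCII code points 32 through 127, and each iteration picks one with $RANDOM, renders it via printf %x plus echo -e, and appends it to string. Condensed into a function (the leading-'-' guard at @28 is handled here by printf instead of echo):

    #!/usr/bin/env bash
    # Condensed gen_random_s (target/invalid.sh@19-31): a string of $1
    # characters drawn from ASCII 32..127 using bash's $RANDOM.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # same code points as the traced array
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        printf '%s\n' "$string"       # sidesteps the leading-'-' echo quirk checked at @28
    }
    gen_random_s 21
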
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=-
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=.
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.594       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:19:27.594      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:19:27.594     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:19:27.595      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]]
00:19:27.595     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r-U]:0.U^I)=TtQ/SWV"R'
00:19:27.595    13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'r-U]:0.U^I)=TtQ/SWV"R' nqn.2016-06.io.spdk:cnode27644
00:19:27.853  [2024-12-14 13:45:27.454206] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27644: invalid serial number 'r-U]:0.U^I)=TtQ/SWV"R'
00:19:27.853   13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:19:27.853  {
00:19:27.853    "nqn": "nqn.2016-06.io.spdk:cnode27644",
00:19:27.853    "serial_number": "r-U]:0.U^I)=TtQ/SWV\"R",
00:19:27.853    "method": "nvmf_create_subsystem",
00:19:27.853    "req_id": 1
00:19:27.853  }
00:19:27.853  Got JSON-RPC error response
00:19:27.853  response:
00:19:27.853  {
00:19:27.853    "code": -32602,
00:19:27.853    "message": "Invalid SN r-U]:0.U^I)=TtQ/SWV\"R"
00:19:27.853  }'
00:19:27.853   13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:19:27.853  {
00:19:27.853    "nqn": "nqn.2016-06.io.spdk:cnode27644",
00:19:27.853    "serial_number": "r-U]:0.U^I)=TtQ/SWV\"R",
00:19:27.853    "method": "nvmf_create_subsystem",
00:19:27.853    "req_id": 1
00:19:27.853  }
00:19:27.853  Got JSON-RPC error response
00:19:27.853  response:
00:19:27.853  {
00:19:27.853    "code": -32602,
00:19:27.853    "message": "Invalid SN r-U]:0.U^I)=TtQ/SWV\"R"
00:19:27.853  } == *\I\n\v\a\l\i\d\ \S\N* ]]
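
Worth noting: the serial above is only pseudo-random. invalid.sh@16 set RANDOM=0, and assigning to RANDOM reseeds bash's generator, so repeated runs on the same bash build reproduce the identical strings (r-U]:0.U^I)=TtQ/SWV"R here), which keeps any failure reproducible. A two-line check of the reseeding behavior:

    #!/usr/bin/env bash
    # Assigning to RANDOM seeds bash's PRNG, making sequences repeatable.
    RANDOM=0; first=$RANDOM
    RANDOM=0; second=$RANDOM
    [[ $first == "$second" ]] && echo "seeded sequences match: $first"
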
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.853       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:19:27.853      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:19:27.853     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:27.854       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:19:27.854      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:27.854     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.112       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54
00:19:28.112      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36'
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6
00:19:28.112     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='('
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113       13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:19:28.113      13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]]
00:19:28.113     13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'F_O\f|dn>9rJThZC"1v679J4Xg{&/0ujt(0@jjU5'
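The character-by-character trace above is target/invalid.sh building a 40-character random model number: each pass converts a code point to hex with printf %x and expands it back into a raw character with echo -e '\xHH'. A minimal sketch of that loop, assuming illustrative names (the function and range here are not the script's own):

    gen_random_string() {
        local length=$1 ll string=''
        for (( ll = 0; ll < length; ll++ )); do
            # pick a code point in 0x20-0x7f; the traced run also hit 0x7f (DEL)
            local code=$(( RANDOM % 96 + 32 ))
            # printf %x renders it as hex, echo -e expands the \xHH escape
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }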
00:19:28.113    13:45:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'F_O\f|dn>9rJThZC"1v679J4Xg{&/0ujt(0@jjU5' nqn.2016-06.io.spdk:cnode21344
00:19:28.371  [2024-12-14 13:45:27.979996] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21344: invalid model number 'F_O\f|dn>9rJThZC"1v679J4Xg{&/0ujt(0@jjU5'
00:19:28.371   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:19:28.371  {
00:19:28.372    "nqn": "nqn.2016-06.io.spdk:cnode21344",
00:19:28.372    "model_number": "F_\u007fO\\f|dn>9rJThZC\"1v679J4Xg{&/0ujt(0@jjU5",
00:19:28.372    "method": "nvmf_create_subsystem",
00:19:28.372    "req_id": 1
00:19:28.372  }
00:19:28.372  Got JSON-RPC error response
00:19:28.372  response:
00:19:28.372  {
00:19:28.372    "code": -32602,
00:19:28.372    "message": "Invalid MN F_\u007fO\\f|dn>9rJThZC\"1v679J4Xg{&/0ujt(0@jjU5"
00:19:28.372  }'
00:19:28.372   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:19:28.372  {
00:19:28.372    "nqn": "nqn.2016-06.io.spdk:cnode21344",
00:19:28.372    "model_number": "F_\u007fO\\f|dn>9rJThZC\"1v679J4Xg{&/0ujt(0@jjU5",
00:19:28.372    "method": "nvmf_create_subsystem",
00:19:28.372    "req_id": 1
00:19:28.372  }
00:19:28.372  Got JSON-RPC error response
00:19:28.372  response:
00:19:28.372  {
00:19:28.372    "code": -32602,
00:19:28.372    "message": "Invalid MN F_\u007fO\\f|dn>9rJThZC\"1v679J4Xg{&/0ujt(0@jjU5"
00:19:28.372  } == *\I\n\v\a\l\i\d\ \M\N* ]]
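What follows the generation is a negative test: nvmf_create_subsystem is called with -d (model number) set to the random string, and the script only accepts the failure if the JSON-RPC error text matches a glob. Roughly (rpc path shortened; the backslash-escaped pattern in the trace is just xtrace quoting the spaces):

    out=$(scripts/rpc.py nvmf_create_subsystem -d "$bad_model_number" "$nqn" 2>&1) || true
    [[ $out == *"Invalid MN"* ]]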
00:19:28.372   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma
00:19:28.629  [2024-12-14 13:45:28.225681] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f96f9b01940) succeed.
00:19:28.629  [2024-12-14 13:45:28.235677] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f96f99ba940) succeed.
00:19:28.887   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:19:29.144   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]]
00:19:29.144    13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8
00:19:29.144  192.168.100.9'
00:19:29.144    13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:19:29.144   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8
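invalid.sh@67 then takes the first of the two configured RDMA addresses; a one-line equivalent (RDMA_IP_LIST is a stand-in name for whatever variable holds the traced two-address string):

    IP=$(echo "$RDMA_IP_LIST" | head -n 1)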
00:19:29.145    13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421
00:19:29.402  [2024-12-14 13:45:28.893774] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:19:29.402   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:19:29.402  {
00:19:29.402    "nqn": "nqn.2016-06.io.spdk:cnode",
00:19:29.402    "listen_address": {
00:19:29.402      "trtype": "rdma",
00:19:29.402      "traddr": "192.168.100.8",
00:19:29.402      "trsvcid": "4421"
00:19:29.402    },
00:19:29.402    "method": "nvmf_subsystem_remove_listener",
00:19:29.402    "req_id": 1
00:19:29.402  }
00:19:29.402  Got JSON-RPC error response
00:19:29.402  response:
00:19:29.402  {
00:19:29.402    "code": -32602,
00:19:29.402    "message": "Invalid parameters"
00:19:29.402  }'
00:19:29.402   13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:19:29.402  {
00:19:29.402    "nqn": "nqn.2016-06.io.spdk:cnode",
00:19:29.402    "listen_address": {
00:19:29.402      "trtype": "rdma",
00:19:29.402      "traddr": "192.168.100.8",
00:19:29.402      "trsvcid": "4421"
00:19:29.402    },
00:19:29.402    "method": "nvmf_subsystem_remove_listener",
00:19:29.402    "req_id": 1
00:19:29.402  }
00:19:29.402  Got JSON-RPC error response
00:19:29.402  response:
00:19:29.402  {
00:19:29.402    "code": -32602,
00:19:29.402    "message": "Invalid parameters"
00:19:29.402  } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
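Note the inverted check at @70: removing a listener on an address/port that was never added is expected to fail, but with "Invalid parameters" rather than "Unable to stop listener." - hence the != glob. A sketch of that assertion:

    out=$(scripts/rpc.py nvmf_subsystem_remove_listener "$nqn" -t rdma -a "$IP" -s 4421 2>&1) || true
    [[ $out != *"Unable to stop listener."* ]]    # failure expected, but not with this message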
00:19:29.402    13:45:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19441 -i 0
00:19:29.402  [2024-12-14 13:45:29.098501] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19441: invalid cntlid range [0-65519]
00:19:29.402   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:19:29.402  {
00:19:29.402    "nqn": "nqn.2016-06.io.spdk:cnode19441",
00:19:29.402    "min_cntlid": 0,
00:19:29.402    "method": "nvmf_create_subsystem",
00:19:29.402    "req_id": 1
00:19:29.402  }
00:19:29.402  Got JSON-RPC error response
00:19:29.402  response:
00:19:29.402  {
00:19:29.402    "code": -32602,
00:19:29.402    "message": "Invalid cntlid range [0-65519]"
00:19:29.402  }'
00:19:29.402   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:19:29.402  {
00:19:29.403    "nqn": "nqn.2016-06.io.spdk:cnode19441",
00:19:29.403    "min_cntlid": 0,
00:19:29.403    "method": "nvmf_create_subsystem",
00:19:29.403    "req_id": 1
00:19:29.403  }
00:19:29.403  Got JSON-RPC error response
00:19:29.403  response:
00:19:29.403  {
00:19:29.403    "code": -32602,
00:19:29.403    "message": "Invalid cntlid range [0-65519]"
00:19:29.403  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:29.403    13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10323 -i 65520
00:19:29.660  [2024-12-14 13:45:29.299298] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10323: invalid cntlid range [65520-65519]
00:19:29.660   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:19:29.660  {
00:19:29.660    "nqn": "nqn.2016-06.io.spdk:cnode10323",
00:19:29.660    "min_cntlid": 65520,
00:19:29.660    "method": "nvmf_create_subsystem",
00:19:29.660    "req_id": 1
00:19:29.660  }
00:19:29.660  Got JSON-RPC error response
00:19:29.660  response:
00:19:29.660  {
00:19:29.660    "code": -32602,
00:19:29.660    "message": "Invalid cntlid range [65520-65519]"
00:19:29.660  }'
00:19:29.660   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:19:29.660  {
00:19:29.660    "nqn": "nqn.2016-06.io.spdk:cnode10323",
00:19:29.660    "min_cntlid": 65520,
00:19:29.660    "method": "nvmf_create_subsystem",
00:19:29.660    "req_id": 1
00:19:29.660  }
00:19:29.660  Got JSON-RPC error response
00:19:29.660  response:
00:19:29.660  {
00:19:29.660    "code": -32602,
00:19:29.660    "message": "Invalid cntlid range [65520-65519]"
00:19:29.660  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:29.660    13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23629 -I 0
00:19:29.918  [2024-12-14 13:45:29.512137] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23629: invalid cntlid range [1-0]
00:19:29.918   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:19:29.918  {
00:19:29.918    "nqn": "nqn.2016-06.io.spdk:cnode23629",
00:19:29.918    "max_cntlid": 0,
00:19:29.918    "method": "nvmf_create_subsystem",
00:19:29.918    "req_id": 1
00:19:29.918  }
00:19:29.918  Got JSON-RPC error response
00:19:29.918  response:
00:19:29.918  {
00:19:29.918    "code": -32602,
00:19:29.918    "message": "Invalid cntlid range [1-0]"
00:19:29.918  }'
00:19:29.918   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:19:29.918  {
00:19:29.918    "nqn": "nqn.2016-06.io.spdk:cnode23629",
00:19:29.918    "max_cntlid": 0,
00:19:29.918    "method": "nvmf_create_subsystem",
00:19:29.918    "req_id": 1
00:19:29.918  }
00:19:29.918  Got JSON-RPC error response
00:19:29.918  response:
00:19:29.918  {
00:19:29.918    "code": -32602,
00:19:29.918    "message": "Invalid cntlid range [1-0]"
00:19:29.918  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:29.918    13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9670 -I 65520
00:19:30.176  [2024-12-14 13:45:29.724950] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9670: invalid cntlid range [1-65520]
00:19:30.176   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:19:30.176  {
00:19:30.176    "nqn": "nqn.2016-06.io.spdk:cnode9670",
00:19:30.176    "max_cntlid": 65520,
00:19:30.176    "method": "nvmf_create_subsystem",
00:19:30.176    "req_id": 1
00:19:30.176  }
00:19:30.176  Got JSON-RPC error response
00:19:30.176  response:
00:19:30.176  {
00:19:30.176    "code": -32602,
00:19:30.176    "message": "Invalid cntlid range [1-65520]"
00:19:30.176  }'
00:19:30.176   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:19:30.176  {
00:19:30.176    "nqn": "nqn.2016-06.io.spdk:cnode9670",
00:19:30.176    "max_cntlid": 65520,
00:19:30.176    "method": "nvmf_create_subsystem",
00:19:30.176    "req_id": 1
00:19:30.176  }
00:19:30.176  Got JSON-RPC error response
00:19:30.176  response:
00:19:30.176  {
00:19:30.176    "code": -32602,
00:19:30.176    "message": "Invalid cntlid range [1-65520]"
00:19:30.176  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:30.176    13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27778 -i 6 -I 5
00:19:30.434  [2024-12-14 13:45:29.921717] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27778: invalid cntlid range [6-5]
00:19:30.434   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:19:30.434  {
00:19:30.434    "nqn": "nqn.2016-06.io.spdk:cnode27778",
00:19:30.434    "min_cntlid": 6,
00:19:30.434    "max_cntlid": 5,
00:19:30.434    "method": "nvmf_create_subsystem",
00:19:30.434    "req_id": 1
00:19:30.434  }
00:19:30.434  Got JSON-RPC error response
00:19:30.434  response:
00:19:30.434  {
00:19:30.434    "code": -32602,
00:19:30.434    "message": "Invalid cntlid range [6-5]"
00:19:30.434  }'
00:19:30.434   13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:19:30.434  {
00:19:30.434    "nqn": "nqn.2016-06.io.spdk:cnode27778",
00:19:30.434    "min_cntlid": 6,
00:19:30.434    "max_cntlid": 5,
00:19:30.434    "method": "nvmf_create_subsystem",
00:19:30.434    "req_id": 1
00:19:30.434  }
00:19:30.434  Got JSON-RPC error response
00:19:30.434  response:
00:19:30.434  {
00:19:30.434    "code": -32602,
00:19:30.434    "message": "Invalid cntlid range [6-5]"
00:19:30.434  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
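The five nvmf_create_subsystem calls at @73-@83 sweep both ends of the controller-ID window plus an inverted range; the traced error strings imply the accepted cntlid range is [1-65519] (0xFFEF). A condensed equivalent of the five probes (the real script uses a fresh cnodeNNNNN per call):

    for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
        # $args left unquoted on purpose so it splits into separate flags
        out=$(scripts/rpc.py nvmf_create_subsystem "$nqn" $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]
    done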
00:19:30.434    13:45:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:19:30.434  {
00:19:30.434    "name": "foobar",
00:19:30.434    "method": "nvmf_delete_target",
00:19:30.434    "req_id": 1
00:19:30.434  }
00:19:30.434  Got JSON-RPC error response
00:19:30.434  response:
00:19:30.434  {
00:19:30.434    "code": -32602,
00:19:30.434    "message": "The specified target doesn'\''t exist, cannot delete it."
00:19:30.434  }'
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:19:30.434  {
00:19:30.434    "name": "foobar",
00:19:30.434    "method": "nvmf_delete_target",
00:19:30.434    "req_id": 1
00:19:30.434  }
00:19:30.434  Got JSON-RPC error response
00:19:30.434  response:
00:19:30.434  {
00:19:30.434    "code": -32602,
00:19:30.434    "message": "The specified target doesn't exist, cannot delete it."
00:19:30.434  } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
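The last negative case goes through multitarget_rpc.py rather than rpc.py: deleting a target that was never created must produce the exact "cannot delete it" message. Sketch:

    out=$(test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 2>&1) || true
    [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]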
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:30.434  rmmod nvme_rdma
00:19:30.434  rmmod nvme_fabrics
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3318844 ']'
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3318844
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3318844 ']'
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3318844
00:19:30.434    13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:19:30.434   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:30.434    13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3318844
00:19:30.693   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:30.693   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:30.693   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3318844'
00:19:30.693  killing process with pid 3318844
00:19:30.693   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3318844
00:19:30.693   13:45:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3318844
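Teardown then unloads nvme-rdma/nvme-fabrics and kills the nvmf target (pid 3318844). A condensed sketch of the killprocess trace above (common/autotest_common.sh@954-978; the sudo branch is an assumption, it is not exercised in this run):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
        if [[ $(uname) == Linux && $(ps --no-headers -o comm= "$pid") == sudo ]]; then
            sudo kill "$pid"                 # assumed path; here comm= is reactor_0
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"    # works here because the target is a child of this shell
    }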
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:32.636  
00:19:32.636  real	0m13.369s
00:19:32.636  user	0m26.834s
00:19:32.636  sys	0m6.615s
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:19:32.636  ************************************
00:19:32.636  END TEST nvmf_invalid
00:19:32.636  ************************************
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:19:32.636   13:45:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:32.637   13:45:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:32.637   13:45:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:32.637  ************************************
00:19:32.637  START TEST nvmf_connect_stress
00:19:32.637  ************************************
00:19:32.637   13:45:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:19:32.637  * Looking for test storage...
00:19:32.637  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
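The scripts/common.sh trace above is a field-wise version comparison: both versions are split on '.', '-' and ':' (IFS=.-:) and compared element by element up to the longer length; the traced call is lt 1.15 2, gating which lcov options to use. A condensed sketch, with missing fields defaulting to 0 (an assumption; the real script validates each field as a decimal first):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v lt=0 gt=0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
        done
        case $op in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }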
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:32.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.637  		--rc genhtml_branch_coverage=1
00:19:32.637  		--rc genhtml_function_coverage=1
00:19:32.637  		--rc genhtml_legend=1
00:19:32.637  		--rc geninfo_all_blocks=1
00:19:32.637  		--rc geninfo_unexecuted_blocks=1
00:19:32.637  		
00:19:32.637  		'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:32.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.637  		--rc genhtml_branch_coverage=1
00:19:32.637  		--rc genhtml_function_coverage=1
00:19:32.637  		--rc genhtml_legend=1
00:19:32.637  		--rc geninfo_all_blocks=1
00:19:32.637  		--rc geninfo_unexecuted_blocks=1
00:19:32.637  		
00:19:32.637  		'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:32.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.637  		--rc genhtml_branch_coverage=1
00:19:32.637  		--rc genhtml_function_coverage=1
00:19:32.637  		--rc genhtml_legend=1
00:19:32.637  		--rc geninfo_all_blocks=1
00:19:32.637  		--rc geninfo_unexecuted_blocks=1
00:19:32.637  		
00:19:32.637  		'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:32.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.637  		--rc genhtml_branch_coverage=1
00:19:32.637  		--rc genhtml_function_coverage=1
00:19:32.637  		--rc genhtml_legend=1
00:19:32.637  		--rc geninfo_all_blocks=1
00:19:32.637  		--rc geninfo_unexecuted_blocks=1
00:19:32.637  		
00:19:32.637  		'
00:19:32.637   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:32.637     13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:32.637      13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.637      13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.637      13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.637      13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:19:32.637      13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:32.637    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:32.638  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
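One real wart in this run: nvmf/common.sh@33 evaluates '[' '' -eq 1 ']' and bash logs "[: : integer expression expected" because the tested variable is empty; the test simply falls through and the run continues. A defensive form would supply a numeric default or use [[ ]], where an empty operand evaluates arithmetically to 0 (FLAG is a stand-in name, and this is illustrative only, not the upstream fix):

    if [[ ${FLAG:-0} -eq 1 ]]; then
        :    # branch body as in common.sh; omitted here
    fi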
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:32.638    13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:19:32.638   13:45:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:39.244   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:19:39.245  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:19:39.245  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:19:39.245  Found net devices under 0000:d9:00.0: mlx_0_0
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:19:39.245  Found net devices under 0000:d9:00.1: mlx_0_1
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
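The discovery above reduces to a small loop over the detected mlx5 functions: glob the netdev names out of sysfs, strip the path, and accumulate them. Condensed from the trace (nvmf/common.sh@410-429):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep just the interface names, e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done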
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm
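rdma_device_init first loads the kernel IB/RDMA stack; the seven modprobe calls above are equivalent to:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done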
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:39.245     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:39.245     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
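get_rdma_if_list intersects the PCI-discovered netdevs with what rxe_cfg reports, emitting each match once - that is the continue-2 pattern traced above. Condensed (rxe_cfg here stands for the wrapper around scripts/rxe_cfg_small.sh seen at @58):

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2    # move on to the next net_dev once matched
                fi
            done
        done
    }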
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:19:39.245  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:39.245      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:19:39.245      altname enp217s0f0np0
00:19:39.245      altname ens818f0np0
00:19:39.245      inet 192.168.100.8/24 scope global mlx_0_0
00:19:39.245         valid_lft forever preferred_lft forever
00:19:39.245   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:39.245    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:19:39.246  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:39.246      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:19:39.246      altname enp217s0f1np1
00:19:39.246      altname ens818f1np1
00:19:39.246      inet 192.168.100.9/24 scope global mlx_0_1
00:19:39.246         valid_lft forever preferred_lft forever
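The two lookups above reduce to a single pipeline. A minimal sketch of the get_ip_address helper as reconstructed from the xtrace lines (the function name comes from the trace; the exact body is an assumption, not the verbatim SPDK source):

  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
      # so awk selects it and cut strips the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node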
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:39.246      13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:39.246      13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:39.246     13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:19:39.246  192.168.100.9'
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:19:39.246  192.168.100.9'
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1
00:19:39.246    13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:19:39.246  192.168.100.9'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
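The head/tail juggling above just peels the first and second entries off the newline-separated list; as a sketch:

  # RDMA_IP_LIST holds one address per line, as echoed in the trace.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)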
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3323259
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3323259
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3323259 ']'
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:39.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:39.246   13:45:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:39.246  [2024-12-14 13:45:38.740815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:39.246  [2024-12-14 13:45:38.740920] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:39.246  [2024-12-14 13:45:38.874187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:39.246  [2024-12-14 13:45:38.972347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:39.246  [2024-12-14 13:45:38.972410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:39.246  [2024-12-14 13:45:38.972422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:39.246  [2024-12-14 13:45:38.972435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:39.246  [2024-12-14 13:45:38.972445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:39.246  [2024-12-14 13:45:38.974723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:19:39.246  [2024-12-14 13:45:38.974785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:19:39.246  [2024-12-14 13:45:38.974797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
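nvmfappstart amounts to launching the target and blocking until its RPC socket answers. A rough sketch, assuming waitforlisten polls the UNIX socket (the binary path, flags, socket path, and max_retries=100 come from the trace; the loop body is a guess):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do           # max_retries=100 per the trace
      [[ -S /var/tmp/spdk.sock ]] && break  # socket appears once the app listens
      sleep 0.1
  done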
00:19:39.813   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:39.813   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:19:39.813   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:39.813   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:39.813   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:40.071   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:40.071   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:19:40.071   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.071   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:40.071  [2024-12-14 13:45:39.618444] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f7cd8fbd940) succeed.
00:19:40.071  [2024-12-14 13:45:39.627936] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f7cd8f79940) succeed.
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:40.330  [2024-12-14 13:45:39.844957] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:40.330  NULL1
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
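rpc_cmd is a wrapper around SPDK's JSON-RPC client, so the target setup traced above collapses to four calls. Arguments are copied verbatim from the trace; invoking scripts/rpc.py directly is an assumption about what rpc_cmd expands to:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512   # null bdev NULL1, arguments as traced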
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3323540
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:19:40.330   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
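Lines 20-21 of connect_stress.sh, as traced, start the stress client in the background and record its PID for the liveness polling below; the '&'/'$!' pairing is implied by the trace rather than shown:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!   # 3323540 in this run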
00:19:40.331    13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.331   13:45:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
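The blocks that repeat from here down to the "No such process" line are iterations of one polling loop; a sketch of what lines 34-35 of connect_stress.sh appear to implement (the while construct is assumed):

  # Keep the RPC server busy for as long as the stress client stays alive.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"   # replay the rpc.txt assembled by the seq 1 20 loop
  done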
00:19:40.897   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.897   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:40.897   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:40.897   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.897   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:41.156   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.156   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:41.156   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:41.156   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.156   13:45:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:41.414   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.414   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:41.414   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:41.414   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.414   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:41.672   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.672   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:41.672   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:41.672   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.672   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:42.239   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.239   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:42.239   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:42.239   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.239   13:45:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:42.497   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.497   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:42.497   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:42.497   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.497   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:42.755   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.755   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:42.755   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:42.755   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.755   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:43.322   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.322   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:43.322   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:43.322   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.322   13:45:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:43.581   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.581   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:43.581   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:43.581   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.581   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:43.838   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.838   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:44.096   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:44.096   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.096   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:44.354   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.354   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:44.354   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:44.354   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.354   13:45:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:44.612   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.612   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:44.612   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:44.612   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.612   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:45.179   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.179   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:45.179   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:45.179   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.179   13:45:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:45.437   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.437   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:45.437   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:45.437   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.437   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:45.695   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.695   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:45.695   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:45.695   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.695   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:46.261   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.261   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:46.261   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:46.261   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.261   13:45:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:46.519   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.519   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:46.519   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:46.519   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.519   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:46.777   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.777   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:46.777   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:46.777   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.777   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:47.343   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.343   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:47.343   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:47.343   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.343   13:45:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:47.601   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.601   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:47.601   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:47.601   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.601   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:47.859   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.859   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:47.859   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:47.859   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.859   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:48.425   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.425   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:48.425   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:48.425   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.426   13:45:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:48.683   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.683   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:48.683   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:48.683   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.683   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:48.941   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.941   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:48.941   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:48.941   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.941   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:49.507   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.507   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:49.507   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:49.507   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.507   13:45:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:49.765   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.765   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:49.765   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:49.765   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.765   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:50.023   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.023   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:50.023   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:50.023   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.023   13:45:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:50.591   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.591   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:50.591   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:19:50.591   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.591   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:50.591  Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323540
00:19:50.850  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3323540) - No such process
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3323540
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
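Once kill -0 reports "No such process" the client has already exited on its own; the wait that follows merely reaps it and surfaces its exit status. The teardown implied by lines 34-43 above, sketched:

  kill -0 "$PERF_PID" 2>/dev/null || wait "$PERF_PID"   # reap; non-zero status fails the test
  rm -f "$rpcs"
  trap - SIGINT SIGTERM EXIT
  nvmftestfini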
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:50.850  rmmod nvme_rdma
00:19:50.850  rmmod nvme_fabrics
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
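The set +e / set -e bracket exists because module removal can fail while references are still draining. The unload loop reconstructed from the trace (the {1..20} retry bound is shown above; sleeping between attempts is an assumption):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma &&
          modprobe -v -r nvme-fabrics && break
      sleep 1   # assumed back-off between retries
  done
  set -e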
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3323259 ']'
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3323259
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3323259 ']'
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3323259
00:19:50.850    13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:50.850    13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3323259
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3323259'
00:19:50.850  killing process with pid 3323259
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3323259
00:19:50.850   13:45:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3323259
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:52.753  
00:19:52.753  real	0m20.175s
00:19:52.753  user	0m44.294s
00:19:52.753  sys	0m9.263s
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:52.753  ************************************
00:19:52.753  END TEST nvmf_connect_stress
00:19:52.753  ************************************
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:52.753  ************************************
00:19:52.753  START TEST nvmf_fused_ordering
00:19:52.753  ************************************
00:19:52.753   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma
00:19:52.753  * Looking for test storage...
00:19:52.753  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:52.753     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:19:52.753     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:19:52.753    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
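The scripts/common.sh trace above performs an element-wise version comparison: both strings are split on '.', '-' and ':' and compared component by component. A compact reconstruction (the trace pads missing components through its decimal helper; defaulting them to 0 here is an assumption):

  lt() {   # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1   # equal is not less-than
  }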
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:52.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:52.754  		--rc genhtml_branch_coverage=1
00:19:52.754  		--rc genhtml_function_coverage=1
00:19:52.754  		--rc genhtml_legend=1
00:19:52.754  		--rc geninfo_all_blocks=1
00:19:52.754  		--rc geninfo_unexecuted_blocks=1
00:19:52.754  		
00:19:52.754  		'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:52.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:52.754  		--rc genhtml_branch_coverage=1
00:19:52.754  		--rc genhtml_function_coverage=1
00:19:52.754  		--rc genhtml_legend=1
00:19:52.754  		--rc geninfo_all_blocks=1
00:19:52.754  		--rc geninfo_unexecuted_blocks=1
00:19:52.754  		
00:19:52.754  		'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:52.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:52.754  		--rc genhtml_branch_coverage=1
00:19:52.754  		--rc genhtml_function_coverage=1
00:19:52.754  		--rc genhtml_legend=1
00:19:52.754  		--rc geninfo_all_blocks=1
00:19:52.754  		--rc geninfo_unexecuted_blocks=1
00:19:52.754  		
00:19:52.754  		'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:52.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:52.754  		--rc genhtml_branch_coverage=1
00:19:52.754  		--rc genhtml_function_coverage=1
00:19:52.754  		--rc genhtml_legend=1
00:19:52.754  		--rc geninfo_all_blocks=1
00:19:52.754  		--rc geninfo_unexecuted_blocks=1
00:19:52.754  		
00:19:52.754  		'
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:52.754     13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:52.754      13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:52.754      13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:52.754      13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:52.754      13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:19:52.754      13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
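Each re-sourcing of paths/export.sh prepends the same tool directories again, which is why the PATH above ends up carrying several copies of every /opt entry. A hypothetical idempotent guard (not in the SPDK tree) would avoid the growth:

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;            # already present, skip
          *) PATH=$1:$PATH ;;
      esac
  }
  prepend_path /opt/go/1.21.1/bin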
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:52.754  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
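The "integer expression expected" complaint above is a real script defect: line 33 of nvmf/common.sh runs '[' '' -eq 1 ']', and test requires an integer on both sides of -eq. A defensive form, with a placeholder name since xtrace has already expanded the unset variable away:

  # SOME_TEST_FLAG is hypothetical; the trace does not show which variable is empty.
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      :   # whatever the guarded branch of common.sh does
  fi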
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:52.754    13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:19:52.754   13:45:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
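gather_supported_nvmf_pci_devs builds its e810/x722/mlx arrays by matching vendor:device IDs (Intel 0x8086, Mellanox 0x15b3) against a PCI bus cache; on this node only the two Mellanox 0x1015 functions survive the mlx5 filter. A minimal sketch of the same match done directly against sysfs (the pci_bus_cache internals are SPDK's own and not reproduced here):

    # List PCI functions whose vendor:device matches the ConnectX-4 Lx
    # pair reported below (0x15b3 - 0x1015).
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")
        device=$(cat "$dev/device")
        if [ "$vendor" = "0x15b3" ] && [ "$device" = "0x1015" ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done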
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:19:59.319  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:19:59.319  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
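For each matched function the loop above checks which kernel driver is bound (mlx5_core here, so neither the unknown nor the unbound branch fires) before settling on the RDMA connect command. A minimal sketch of that driver lookup, assuming the sysfs layout the test relies on:

    # Resolve the driver bound to a PCI function via its sysfs symlink.
    pci=0000:d9:00.0
    driver=unbound
    if [ -e "/sys/bus/pci/devices/$pci/driver" ]; then
        driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
    fi
    echo "$pci is bound to $driver"   # expected here: mlx5_core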
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:59.319   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:19:59.320  Found net devices under 0000:d9:00.0: mlx_0_0
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:19:59.320  Found net devices under 0000:d9:00.1: mlx_0_1
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
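Each PCI function is then mapped to its network interface by globbing /sys/bus/pci/devices/$pci/net/ and stripping the path prefix, which is where the mlx_0_0 and mlx_0_1 names come from. A minimal standalone version of that mapping:

    # Print the netdev name(s) behind a PCI function, as the
    # "Found net devices under ..." lines above do.
    pci=0000:d9:00.0
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue   # no netdev registered for this function
        echo "Found net devices under $pci: ${netdir##*/}"
    done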
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm
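rdma_device_init simply modprobes the whole IB/RDMA core stack; modprobe is idempotent, so rerunning this on a host that already has the modules loaded is harmless. The same sequence written as a loop:

    # Load the IB/RDMA kernel modules the RDMA transport tests depend on.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$mod"
    done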
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:19:59.320  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:59.320      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:19:59.320      altname enp217s0f0np0
00:19:59.320      altname ens818f0np0
00:19:59.320      inet 192.168.100.8/24 scope global mlx_0_0
00:19:59.320         valid_lft forever preferred_lft forever
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:19:59.320  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:59.320      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:19:59.320      altname enp217s0f1np1
00:19:59.320      altname ens818f1np1
00:19:59.320      inet 192.168.100.9/24 scope global mlx_0_1
00:19:59.320         valid_lft forever preferred_lft forever
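allocate_nic_ips walks the RDMA interface list and reads each interface's IPv4 address with an ip/awk/cut pipeline; since both interfaces already carry 192.168.100.8/24 and 192.168.100.9/24, no new addresses need assigning. The pipeline as a reusable function, exactly as traced above:

    # Extract the bare IPv4 address of an interface: field $4 of
    # `ip -o -4 addr show` is the CIDR address; cut drops the /24.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this node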
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:59.320   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:59.320      13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:59.320      13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:59.320     13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:59.320    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:19:59.321  192.168.100.9'
00:19:59.321    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:19:59.321  192.168.100.9'
00:19:59.321    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:59.321    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:19:59.321  192.168.100.9'
00:19:59.321    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2
00:19:59.321    13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma
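The two-line RDMA_IP_LIST is then split into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head/tail, the transport options gain --num-shared-buffers 1024, and nvme-rdma is modprobed on the host side. The splitting trick in isolation:

    # Peel the first and second addresses off a newline-separated list.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"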
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3328846
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3328846
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3328846 ']'
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:59.321  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:59.321   13:45:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:59.321  [2024-12-14 13:45:59.041399] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:59.321  [2024-12-14 13:45:59.041494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:59.579  [2024-12-14 13:45:59.171913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:59.579  [2024-12-14 13:45:59.271524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:59.579  [2024-12-14 13:45:59.271569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:59.579  [2024-12-14 13:45:59.271583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:59.579  [2024-12-14 13:45:59.271597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:59.579  [2024-12-14 13:45:59.271607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:59.579  [2024-12-14 13:45:59.273009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
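The nvmf_tgt startup notices above also say how to inspect the tracepoints enabled by -e 0xFFFF: attach spdk_trace to the live shm region, or copy the region for later. Both steps as the app describes them (the build-relative spdk_trace path is this workspace's layout; any offline-analysis flags beyond the copy are not shown in this trace):

    # Snapshot the live trace for app instance 0 ...
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # ... or keep the shm file for offline analysis, as the notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0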
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.146   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.404  [2024-12-14 13:45:59.904104] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f15df325940) succeed.
00:20:00.404  [2024-12-14 13:45:59.913467] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f15df1bd940) succeed.
00:20:00.404   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.404   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:20:00.404   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.404   13:45:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.404  [2024-12-14 13:46:00.009488] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.404  NULL1
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.404   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:00.405   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
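Steps 15-20 of fused_ordering.sh provision the target over the RPC socket: an RDMA transport, subsystem cnode1 with serial SPDK00000000000001 and a 10-namespace cap, a listener on 192.168.100.8:4420, and a 1000 MiB / 512 B null bdev attached as namespace 1. The same sequence issued directly with SPDK's rpc.py (all names and values taken from the trace above):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1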
00:20:00.405   13:46:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:00.405  [2024-12-14 13:46:00.092456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:00.405  [2024-12-14 13:46:00.092520] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329113 ]
00:20:00.663  Attached to nqn.2016-06.io.spdk:cnode1
00:20:00.663    Namespace ID: 1 size: 1GB
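The fused_ordering binary attaches to cnode1 over RDMA and emits one fused_ordering(N) marker per iteration as it exercises the target (condensed below). For a manual host-side attach to the same subsystem, the script's NVME_CONNECT choice translates to nvme-cli as follows (-i is nvme-cli's I/O queue count option; the value 15 comes from the NVME_CONNECT assignment earlier in this trace):

    # Connect, inspect, and detach a kernel NVMe-oF host against the target.
    sudo nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1
    sudo nvme list
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1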
00:20:00.663  fused_ordering(0) … fused_ordering(836)  (837 consecutive per-iteration markers, emitted between 00:20:00.663 and 00:20:01.442)
00:20:01.443  fused_ordering(837)
00:20:01.443  fused_ordering(838)
00:20:01.443  fused_ordering(839)
00:20:01.443  fused_ordering(840)
00:20:01.443  fused_ordering(841)
00:20:01.443  fused_ordering(842)
00:20:01.443  fused_ordering(843)
00:20:01.443  fused_ordering(844)
00:20:01.443  fused_ordering(845)
00:20:01.443  fused_ordering(846)
00:20:01.443  fused_ordering(847)
00:20:01.443  fused_ordering(848)
00:20:01.443  fused_ordering(849)
00:20:01.443  fused_ordering(850)
00:20:01.443  fused_ordering(851)
00:20:01.443  fused_ordering(852)
00:20:01.443  fused_ordering(853)
00:20:01.443  fused_ordering(854)
00:20:01.443  fused_ordering(855)
00:20:01.443  fused_ordering(856)
00:20:01.443  fused_ordering(857)
00:20:01.443  fused_ordering(858)
00:20:01.443  fused_ordering(859)
00:20:01.443  fused_ordering(860)
00:20:01.443  fused_ordering(861)
00:20:01.443  fused_ordering(862)
00:20:01.443  fused_ordering(863)
00:20:01.443  fused_ordering(864)
00:20:01.443  fused_ordering(865)
00:20:01.443  fused_ordering(866)
00:20:01.443  fused_ordering(867)
00:20:01.443  fused_ordering(868)
00:20:01.443  fused_ordering(869)
00:20:01.443  fused_ordering(870)
00:20:01.443  fused_ordering(871)
00:20:01.443  fused_ordering(872)
00:20:01.443  fused_ordering(873)
00:20:01.443  fused_ordering(874)
00:20:01.443  fused_ordering(875)
00:20:01.443  fused_ordering(876)
00:20:01.443  fused_ordering(877)
00:20:01.443  fused_ordering(878)
00:20:01.443  fused_ordering(879)
00:20:01.443  fused_ordering(880)
00:20:01.443  fused_ordering(881)
00:20:01.443  fused_ordering(882)
00:20:01.443  fused_ordering(883)
00:20:01.443  fused_ordering(884)
00:20:01.443  fused_ordering(885)
00:20:01.443  fused_ordering(886)
00:20:01.443  fused_ordering(887)
00:20:01.443  fused_ordering(888)
00:20:01.443  fused_ordering(889)
00:20:01.443  fused_ordering(890)
00:20:01.443  fused_ordering(891)
00:20:01.443  fused_ordering(892)
00:20:01.443  fused_ordering(893)
00:20:01.443  fused_ordering(894)
00:20:01.443  fused_ordering(895)
00:20:01.443  fused_ordering(896)
00:20:01.443  fused_ordering(897)
00:20:01.443  fused_ordering(898)
00:20:01.443  fused_ordering(899)
00:20:01.443  fused_ordering(900)
00:20:01.443  fused_ordering(901)
00:20:01.443  fused_ordering(902)
00:20:01.443  fused_ordering(903)
00:20:01.443  fused_ordering(904)
00:20:01.443  fused_ordering(905)
00:20:01.443  fused_ordering(906)
00:20:01.443  fused_ordering(907)
00:20:01.443  fused_ordering(908)
00:20:01.443  fused_ordering(909)
00:20:01.443  fused_ordering(910)
00:20:01.443  fused_ordering(911)
00:20:01.443  fused_ordering(912)
00:20:01.443  fused_ordering(913)
00:20:01.443  fused_ordering(914)
00:20:01.443  fused_ordering(915)
00:20:01.443  fused_ordering(916)
00:20:01.443  fused_ordering(917)
00:20:01.443  fused_ordering(918)
00:20:01.443  fused_ordering(919)
00:20:01.443  fused_ordering(920)
00:20:01.443  fused_ordering(921)
00:20:01.443  fused_ordering(922)
00:20:01.443  fused_ordering(923)
00:20:01.443  fused_ordering(924)
00:20:01.443  fused_ordering(925)
00:20:01.443  fused_ordering(926)
00:20:01.443  fused_ordering(927)
00:20:01.443  fused_ordering(928)
00:20:01.443  fused_ordering(929)
00:20:01.443  fused_ordering(930)
00:20:01.443  fused_ordering(931)
00:20:01.443  fused_ordering(932)
00:20:01.443  fused_ordering(933)
00:20:01.443  fused_ordering(934)
00:20:01.443  fused_ordering(935)
00:20:01.443  fused_ordering(936)
00:20:01.443  fused_ordering(937)
00:20:01.443  fused_ordering(938)
00:20:01.443  fused_ordering(939)
00:20:01.443  fused_ordering(940)
00:20:01.443  fused_ordering(941)
00:20:01.443  fused_ordering(942)
00:20:01.443  fused_ordering(943)
00:20:01.443  fused_ordering(944)
00:20:01.443  fused_ordering(945)
00:20:01.443  fused_ordering(946)
00:20:01.443  fused_ordering(947)
00:20:01.443  fused_ordering(948)
00:20:01.443  fused_ordering(949)
00:20:01.443  fused_ordering(950)
00:20:01.443  fused_ordering(951)
00:20:01.443  fused_ordering(952)
00:20:01.443  fused_ordering(953)
00:20:01.443  fused_ordering(954)
00:20:01.443  fused_ordering(955)
00:20:01.443  fused_ordering(956)
00:20:01.443  fused_ordering(957)
00:20:01.443  fused_ordering(958)
00:20:01.443  fused_ordering(959)
00:20:01.443  fused_ordering(960)
00:20:01.443  fused_ordering(961)
00:20:01.443  fused_ordering(962)
00:20:01.443  fused_ordering(963)
00:20:01.443  fused_ordering(964)
00:20:01.443  fused_ordering(965)
00:20:01.443  fused_ordering(966)
00:20:01.443  fused_ordering(967)
00:20:01.443  fused_ordering(968)
00:20:01.443  fused_ordering(969)
00:20:01.443  fused_ordering(970)
00:20:01.443  fused_ordering(971)
00:20:01.443  fused_ordering(972)
00:20:01.443  fused_ordering(973)
00:20:01.443  fused_ordering(974)
00:20:01.443  fused_ordering(975)
00:20:01.443  fused_ordering(976)
00:20:01.443  fused_ordering(977)
00:20:01.443  fused_ordering(978)
00:20:01.443  fused_ordering(979)
00:20:01.443  fused_ordering(980)
00:20:01.443  fused_ordering(981)
00:20:01.443  fused_ordering(982)
00:20:01.443  fused_ordering(983)
00:20:01.443  fused_ordering(984)
00:20:01.443  fused_ordering(985)
00:20:01.443  fused_ordering(986)
00:20:01.443  fused_ordering(987)
00:20:01.443  fused_ordering(988)
00:20:01.443  fused_ordering(989)
00:20:01.443  fused_ordering(990)
00:20:01.443  fused_ordering(991)
00:20:01.443  fused_ordering(992)
00:20:01.443  fused_ordering(993)
00:20:01.443  fused_ordering(994)
00:20:01.443  fused_ordering(995)
00:20:01.443  fused_ordering(996)
00:20:01.443  fused_ordering(997)
00:20:01.443  fused_ordering(998)
00:20:01.443  fused_ordering(999)
00:20:01.443  fused_ordering(1000)
00:20:01.443  fused_ordering(1001)
00:20:01.443  fused_ordering(1002)
00:20:01.443  fused_ordering(1003)
00:20:01.443  fused_ordering(1004)
00:20:01.443  fused_ordering(1005)
00:20:01.443  fused_ordering(1006)
00:20:01.443  fused_ordering(1007)
00:20:01.443  fused_ordering(1008)
00:20:01.443  fused_ordering(1009)
00:20:01.443  fused_ordering(1010)
00:20:01.443  fused_ordering(1011)
00:20:01.443  fused_ordering(1012)
00:20:01.443  fused_ordering(1013)
00:20:01.443  fused_ordering(1014)
00:20:01.443  fused_ordering(1015)
00:20:01.443  fused_ordering(1016)
00:20:01.443  fused_ordering(1017)
00:20:01.443  fused_ordering(1018)
00:20:01.443  fused_ordering(1019)
00:20:01.443  fused_ordering(1020)
00:20:01.443  fused_ordering(1021)
00:20:01.443  fused_ordering(1022)
00:20:01.443  fused_ordering(1023)
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:01.443  rmmod nvme_rdma
00:20:01.443  rmmod nvme_fabrics
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3328846 ']'
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3328846
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3328846 ']'
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3328846
00:20:01.443    13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:01.443    13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3328846
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3328846'
00:20:01.443  killing process with pid 3328846
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3328846
00:20:01.443   13:46:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3328846
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:02.819  
00:20:02.819  real	0m10.173s
00:20:02.819  user	0m6.199s
00:20:02.819  sys	0m5.634s
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:02.819  ************************************
00:20:02.819  END TEST nvmf_fused_ordering
00:20:02.819  ************************************
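
The teardown traced above (nvmftestfini -> nvmfcleanup -> killprocess) unwinds the fused_ordering test: the nvme-rdma and nvme-fabrics modules are removed, then the nvmf_tgt reactor (pid 3328846) is killed and reaped. A minimal sketch of that kill-and-reap pattern, assuming the target is a child of the calling shell; the upstream killprocess in autotest_common.sh carries more guards (uname, sudo check) than shown here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already exited
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reap; only works for child processes
    }
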
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:02.819  ************************************
00:20:02.819  START TEST nvmf_ns_masking
00:20:02.819  ************************************
00:20:02.819   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma
00:20:02.819  * Looking for test storage...
00:20:02.819  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:02.819    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:02.819     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:20:02.819     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:03.078     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:03.078    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:03.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.078  		--rc genhtml_branch_coverage=1
00:20:03.078  		--rc genhtml_function_coverage=1
00:20:03.078  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
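
The `lt 1.15 2` trace above splits each version string into fields and compares them one at a time; since 1 < 2 in the first field, lcov 1.15 sorts below 2 and the legacy branch/function coverage flags get exported into LCOV_OPTS. A standalone sketch of that dotted-version comparison, simplified from what scripts/common.sh traces here (the real cmp_versions also splits on '-' and ':', and this sketch assumes purely numeric fields):

    version_lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: use legacy LCOV_OPTS"
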
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:03.079     13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:03.079      13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:03.079      13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:03.079      13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:03.079      13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:20:03.079      13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:03.079  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
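
The "[: : integer expression expected" message above is a real (if harmless) bug surfaced by the trace: the preceding command is '[' '' -eq 1 ']', and -eq requires an integer on both sides, but the variable expanded to the empty string. A defensive form that tolerates an unset or empty variable (a sketch only; the variable name is hypothetical and this is not the upstream fix):

    some_flag=${SPDK_TEST_EXAMPLE_FLAG:-}     # hypothetical variable, may be empty
    if [ "${some_flag:-0}" -eq 1 ]; then      # empty string falls back to 0
        echo "flag enabled"
    fi
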
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=193878ab-0406-467f-935d-269d050cbbc1
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=124b9293-ef72-4adb-a6a1-ddcfc54906d5
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:03.079    13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:20:03.079   13:46:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:09.644   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:20:09.645  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:20:09.645  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:20:09.645  Found net devices under 0000:d9:00.0: mlx_0_0
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:20:09.645  Found net devices under 0000:d9:00.1: mlx_0_1
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
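
Each "Found net devices under ..." line above comes from globbing the PCI function's sysfs net directory; vendor 0x15b3 / device 0x1015 identifies a Mellanox ConnectX-4 Lx port. The lookup reduces to the following sketch (standard sysfs paths; the PCI address is the one printed in this run):

    pci=0000:d9:00.0
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # 0x15b3 here
    device=$(cat /sys/bus/pci/devices/$pci/device)    # 0x1015 here
    for net in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$net" ] || continue                     # no netdev registered
        echo "Found net devices under $pci: $(basename "$net")"
    done
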
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:20:09.645    13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm
00:20:09.645   13:46:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:09.645     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:09.645     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:20:09.645  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:09.645      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:20:09.645      altname enp217s0f0np0
00:20:09.645      altname ens818f0np0
00:20:09.645      inet 192.168.100.8/24 scope global mlx_0_0
00:20:09.645         valid_lft forever preferred_lft forever
00:20:09.645   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:09.645    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:20:09.646  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:09.646      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:20:09.646      altname enp217s0f1np1
00:20:09.646      altname ens818f1np1
00:20:09.646      inet 192.168.100.9/24 scope global mlx_0_1
00:20:09.646         valid_lft forever preferred_lft forever
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
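
allocate_nic_ips above resolved 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1 with the same three-stage pipe each time. The helper reduces to:

    get_ip_address() {
        # column 4 of `ip -o -4 addr show IFACE` is ADDR/PREFIX; cut drops the prefix
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
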
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:09.646      13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:09.646      13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:09.646     13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:20:09.646  192.168.100.9'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:20:09.646  192.168.100.9'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:20:09.646  192.168.100.9'
00:20:09.646    13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3332755
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3332755
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3332755 ']'
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:09.646  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:09.646   13:46:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.646  [2024-12-14 13:46:09.310787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:09.646  [2024-12-14 13:46:09.310880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:09.905  [2024-12-14 13:46:09.443263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:09.905  [2024-12-14 13:46:09.539987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:09.905  [2024-12-14 13:46:09.540040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:09.905  [2024-12-14 13:46:09.540053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:09.905  [2024-12-14 13:46:09.540083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:09.905  [2024-12-14 13:46:09.540094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:09.905  [2024-12-14 13:46:09.541418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
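
waitforlisten (invoked at nvmf/common.sh@510 above) parks until pid 3332755 answers on /var/tmp/spdk.sock before the RPC provisioning below proceeds. A minimal polling sketch of that idea; the upstream loop in autotest_common.sh is more elaborate, the retry budget here is assumed, and the rpc.py path is abbreviated relative to this workspace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ <= 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }
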
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:10.472   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:20:10.730  [2024-12-14 13:46:10.344530] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f3f7a13b940) succeed.
00:20:10.730  [2024-12-14 13:46:10.354055] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f3f79fbd940) succeed.
00:20:10.730   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:20:10.730   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:20:10.730   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:20:10.988  Malloc1
00:20:10.988   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:20:11.246  Malloc2
00:20:11.246   13:46:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:20:11.504   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:20:11.762   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:11.762  [2024-12-14 13:46:11.484747] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
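Annotation: everything between 00:20:10.730 and the listener NOTICE is plain provisioning, condensed here from the traced RPCs (the serial SPDKISFASTANDAWESOME is what waitforserial greps for later):

    "$tgt_rpc" bdev_malloc_create 64 512 -b Malloc1     # 64 MiB RAM disk, 512-byte blocks
    "$tgt_rpc" bdev_malloc_create 64 512 -b Malloc2
    "$tgt_rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    "$tgt_rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                 # NSID 1, auto-visible
    "$tgt_rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420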
00:20:12.020   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:20:12.020   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 124b9293-ef72-4adb-a6a1-ddcfc54906d5 -a 192.168.100.8 -s 4420 -i 4
00:20:12.279   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:20:12.279   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:20:12.279   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:20:12.279   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:20:12.279   13:46:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:14.231  [   0]:0x1
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:14.231    13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64e4afb360414c6b87c0b2cf36f07522
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64e4afb360414c6b87c0b2cf36f07522 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
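Annotation: the visibility probe traced above (ns_masking.sh@43-45) is the core assertion of this test: a namespace counts as visible only if it appears in the active-namespace list and identifies with a non-zero NGUID. Condensed into a sketch (the script parameterizes the controller name; /dev/nvme0 is hard-coded here to match this run):

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"                          # active namespace list
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # an inactive (masked) namespace identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }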
00:20:14.231   13:46:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:14.497  [   0]:0x1
00:20:14.497    13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:14.497    13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64e4afb360414c6b87c0b2cf36f07522
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64e4afb360414c6b87c0b2cf36f07522 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:14.497   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:14.497  [   1]:0x2
00:20:14.497    13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:14.497    13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:14.755   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:14.755   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:14.755   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:20:14.755   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:15.013  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
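Annotation: every connect/disconnect cycle in this test has the same shape: attach the kernel initiator with an explicit host NQN and host ID, poll lsblk until the expected count of devices carrying the subsystem serial appears, run the visibility checks, then tear the session down. In sketch form (values copied from the trace; the poll loop stands in for waitforserial's 15-retry loop):

    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 124b9293-ef72-4adb-a6a1-ddcfc54906d5 -a 192.168.100.8 -s 4420 -i 4    # -I: host ID, -i: 4 I/O queues
    while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 1 )); do sleep 2; done
    # ... ns_is_visible assertions run here ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1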
00:20:15.013   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:15.271   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:20:15.271   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:20:15.271   13:46:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 124b9293-ef72-4adb-a6a1-ddcfc54906d5 -a 192.168.100.8 -s 4420 -i 4
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:20:15.836   13:46:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:17.736  [   0]:0x2
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:17.736    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:17.736   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:17.995  [   0]:0x1
00:20:17.995    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:17.995    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64e4afb360414c6b87c0b2cf36f07522
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64e4afb360414c6b87c0b2cf36f07522 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:17.995   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:17.995  [   1]:0x2
00:20:17.995    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:17.995    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:18.253    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:18.253   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:18.253    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:18.253    13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:18.512   13:46:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:18.512  [   0]:0x2
00:20:18.512    13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:18.512    13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:18.512   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:18.512   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
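Annotation: ns_masking.sh@79-@95 above are the heart of the masking test. NSID 1 is re-added with --no-auto-visible, so it stays hidden from the host (all-zero NGUID) until the host NQN is explicitly attached to it, and hides again once detached; NSID 2, added without the flag, remains visible throughout. The three RPCs that drive the state changes, condensed:

    "$tgt_rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # hidden from all hosts
    "$tgt_rpc" nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # unmask for host1
    "$tgt_rpc" nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # mask again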
00:20:18.512   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:20:18.512   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:18.770  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:18.770   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:19.028   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:20:19.028   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 124b9293-ef72-4adb-a6a1-ddcfc54906d5 -a 192.168.100.8 -s 4420 -i 4
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:20:19.286   13:46:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:20:21.183   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:20:21.183    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:20:21.183    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:20:21.184   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:20:21.184   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:20:21.184   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:20:21.184    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:21.184    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:21.441  [   0]:0x1
00:20:21.441    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:21.441    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64e4afb360414c6b87c0b2cf36f07522
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64e4afb360414c6b87c0b2cf36f07522 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.441   13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:21.441  [   1]:0x2
00:20:21.441    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:21.441    13:46:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.441   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:21.441   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.441   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:21.697  [   0]:0x2
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.697    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:20:21.697   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:20:21.955  [2024-12-14 13:46:21.501001] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:20:21.955  request:
00:20:21.955  {
00:20:21.955    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:21.955    "nsid": 2,
00:20:21.955    "host": "nqn.2016-06.io.spdk:host1",
00:20:21.955    "method": "nvmf_ns_remove_host",
00:20:21.955    "req_id": 1
00:20:21.955  }
00:20:21.955  Got JSON-RPC error response
00:20:21.955  response:
00:20:21.955  {
00:20:21.955    "code": -32602,
00:20:21.955    "message": "Invalid parameters"
00:20:21.955  }
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
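Annotation: the -32602 response above is the expected negative case. NSID 2 was added without --no-auto-visible, so it carries no per-host allow-list, and nvmf_ns_remove_host on it is rejected ("Unable to add/remove ... to namespace ID 2"); the NOT wrapper turns that rejection into a pass. The failing call, for reference:

    # rejected with "Invalid parameters": NSID 2 has no host allow-list to edit
    "$tgt_rpc" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1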
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.955    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:21.955    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.955    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:21.955  [   0]:0x2
00:20:21.955    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:21.955    13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9b9f15e76dcc40dfbf37100bd59ce67c
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9b9f15e76dcc40dfbf37100bd59ce67c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:20:21.955   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:22.214  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3335097
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3335097 /var/tmp/host.sock
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3335097 ']'
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:20:22.214  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:22.214   13:46:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:22.472  [2024-12-14 13:46:22.039648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:22.472  [2024-12-14 13:46:22.039760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335097 ]
00:20:22.472  [2024-12-14 13:46:22.171807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:22.730  [2024-12-14 13:46:22.271921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:23.296   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:23.296   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
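Annotation: from ns_masking.sh@117 on, the test starts a second SPDK application as the host side: an spdk_tgt bound to /var/tmp/host.sock on core 1 (-m 2), so its bdev_nvme initiator can be driven over RPC independently of the target on /var/tmp/spdk.sock. A sketch of the launch (the readiness poll stands in for the harness's waitforlisten):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    until "$tgt_rpc" -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done
    hostrpc() { "$tgt_rpc" -s /var/tmp/host.sock "$@"; }   # the hostrpc helper traced below (ns_masking.sh@48)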
00:20:23.296   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:23.554   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:20:23.812    13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b
00:20:23.812    13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:20:23.812   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3E4401C27D654BC6BC88FB8DACD7A95B -i
00:20:24.070    13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 193878ab-0406-467f-935d-269d050cbbc1
00:20:24.070    13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:20:24.070   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 193878AB0406467F935D269D050CBBC1 -i
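Annotation: uuid2nguid is the small helper traced via `tr -d -` above: an NGUID is the UUID's 32 hex digits with the dashes stripped. Only the tr step appears in the trace; the upcasing is inferred from the uppercase output. A sketch:

    uuid2nguid() {
        local uuid=${1^^}       # upcase -- inferred from the traced output, not shown in the trace
        echo "$uuid" | tr -d -  # drop dashes: 36 chars -> 32 hex digits
    }
    uuid2nguid 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b   # -> 3E4401C27D654BC6BC88FB8DACD7A95B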
00:20:24.070   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:24.328   13:46:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:20:24.586   13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:20:24.586   13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:20:24.844  nvme0n1
00:20:24.844   13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:20:24.844   13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:20:25.102  nvme1n2
00:20:25.102    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:20:25.102    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:20:25.102    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:20:25.102    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:20:25.102    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:20:25.360   13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:20:25.360    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:20:25.360    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:20:25.360    13:46:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:20:25.360   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b == \3\e\4\4\0\1\c\2\-\7\d\6\5\-\4\b\c\6\-\b\c\8\8\-\f\b\8\d\a\c\d\7\a\9\5\b ]]
00:20:25.360    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:20:25.360    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:20:25.360    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:20:25.617   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 193878ab-0406-467f-935d-269d050cbbc1 == \1\9\3\8\7\8\a\b\-\0\4\0\6\-\4\6\7\f\-\9\3\5\d\-\2\6\9\d\0\5\0\c\b\b\c\1 ]]
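Annotation: the two attach_controller calls verify masking from the SPDK initiator's side: connecting as host1 yields only nvme0n1 (NSID 1), as host2 only nvme1n2 (NSID 2), and each bdev's UUID round-trips to the UUID whose NGUID was assigned at add_ns time. Condensed from the traced calls:

    hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # -> nvme0n1 only
    hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # -> nvme1n2 only
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b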
00:20:25.617   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:25.874   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E4401C27D654BC6BC88FB8DACD7A95B
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E4401C27D654BC6BC88FB8DACD7A95B
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E4401C27D654BC6BC88FB8DACD7A95B
00:20:26.132  [2024-12-14 13:46:25.812448] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:20:26.132  [2024-12-14 13:46:25.812493] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:20:26.132  [2024-12-14 13:46:25.812509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:26.132  request:
00:20:26.132  {
00:20:26.132    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:26.132    "namespace": {
00:20:26.132      "bdev_name": "invalid",
00:20:26.132      "nsid": 1,
00:20:26.132      "nguid": "3E4401C27D654BC6BC88FB8DACD7A95B",
00:20:26.132      "no_auto_visible": false,
00:20:26.132      "hide_metadata": false
00:20:26.132    },
00:20:26.132    "method": "nvmf_subsystem_add_ns",
00:20:26.132    "req_id": 1
00:20:26.132  }
00:20:26.132  Got JSON-RPC error response
00:20:26.132  response:
00:20:26.132  {
00:20:26.132    "code": -32602,
00:20:26.132    "message": "Invalid parameters"
00:20:26.132  }
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
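Annotation: another expected failure: nvmf_subsystem_add_ns with a bdev name that does not exist fails at bdev open (error=-19, i.e. ENODEV) and surfaces to the RPC client as -32602 "Invalid parameters"; NOT again converts the failure into a pass. The failing call:

    # rejected: no bdev named "invalid" exists, so the namespace cannot be backed
    "$tgt_rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 \
        -g 3E4401C27D654BC6BC88FB8DACD7A95B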
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3e4401c2-7d65-4bc6-bc88-fb8dacd7a95b
00:20:26.132    13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:20:26.132   13:46:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3E4401C27D654BC6BC88FB8DACD7A95B -i
00:20:26.390   13:46:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:20:28.288    13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:20:28.288    13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:20:28.288    13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:20:28.544   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
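Annotation: the final assertion. Both namespaces were removed at @137/@138, and the NSID 1 re-added at @142 is masked again with no hosts attached (the -i short flag appears to track the --no-auto-visible behaviour seen earlier; that mapping is inferred from the traced outcomes, not from the flag itself), so after the 2-second settle the host-side application holds zero namespace bdevs:

    hostrpc bdev_get_bdevs | jq length   # 0: the re-added namespace is hidden from both hosts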
00:20:28.544   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3335097
00:20:28.544   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3335097 ']'
00:20:28.544   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3335097
00:20:28.544    13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:20:28.544   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:28.544    13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3335097
00:20:28.800   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:28.800   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:28.800   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3335097'
00:20:28.800  killing process with pid 3335097
00:20:28.800   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3335097
00:20:28.800   13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3335097
00:20:31.326   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:31.326   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:31.326   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:20:31.326   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:31.326   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:31.327  rmmod nvme_rdma
00:20:31.327  rmmod nvme_fabrics
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3332755 ']'
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3332755
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3332755 ']'
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3332755
00:20:31.327    13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:31.327    13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3332755
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3332755'
00:20:31.327  killing process with pid 3332755
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3332755
00:20:31.327   13:46:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3332755
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
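Annotation: nvmftestfini's teardown as traced above: unload the kernel initiator modules, then kill the nvmf target by PID. Condensed (killprocess also waits for the PID to exit, which works because this shell spawned it):

    modprobe -v -r nvme-rdma             # drops nvme_rdma and nvme_fabrics, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid = 3332755 in this run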
00:20:32.699  
00:20:32.699  real	0m29.915s
00:20:32.699  user	0m38.953s
00:20:32.699  sys	0m7.640s
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:32.699  ************************************
00:20:32.699  END TEST nvmf_ns_masking
00:20:32.699  ************************************
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:32.699   13:46:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:32.700  ************************************
00:20:32.700  START TEST nvmf_nvme_cli
00:20:32.700  ************************************
00:20:32.700   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma
00:20:32.958  * Looking for test storage...
00:20:32.958  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-:
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-:
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:32.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:32.958  		--rc genhtml_branch_coverage=1
00:20:32.958  		--rc genhtml_function_coverage=1
00:20:32.958  		--rc genhtml_legend=1
00:20:32.958  		--rc geninfo_all_blocks=1
00:20:32.958  		--rc geninfo_unexecuted_blocks=1
00:20:32.958  		
00:20:32.958  		'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:32.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:32.958  		--rc genhtml_branch_coverage=1
00:20:32.958  		--rc genhtml_function_coverage=1
00:20:32.958  		--rc genhtml_legend=1
00:20:32.958  		--rc geninfo_all_blocks=1
00:20:32.958  		--rc geninfo_unexecuted_blocks=1
00:20:32.958  		
00:20:32.958  		'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:32.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:32.958  		--rc genhtml_branch_coverage=1
00:20:32.958  		--rc genhtml_function_coverage=1
00:20:32.958  		--rc genhtml_legend=1
00:20:32.958  		--rc geninfo_all_blocks=1
00:20:32.958  		--rc geninfo_unexecuted_blocks=1
00:20:32.958  		
00:20:32.958  		'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:32.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:32.958  		--rc genhtml_branch_coverage=1
00:20:32.958  		--rc genhtml_function_coverage=1
00:20:32.958  		--rc genhtml_legend=1
00:20:32.958  		--rc geninfo_all_blocks=1
00:20:32.958  		--rc geninfo_unexecuted_blocks=1
00:20:32.958  		
00:20:32.958  		'
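The cmp_versions trace above walks both version strings field by field, splitting on `.`, `-` and `:`; since 1 < 2 in the first field, `lt 1.15 2` returns true and the legacy lcov --rc option names are selected. A minimal standalone sketch of the same comparison, assuming bash 4+ (the name ver_lt is a stand-in, not the script's own helper):

ver_lt() {
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                              # equal means not less-than
}

ver_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"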
00:20:32.958   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
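NVME_HOSTNQN above was produced by `nvme gen-hostnqn`, which typically derives an nqn.2014-08.org.nvmexpress:uuid: name from the machine's DMI product UUID, so the value stays stable across runs on the same host. A quick cross-check, as a sketch (the sysfs path is the usual location but may be absent on some platforms, and letter case can differ):

nvme gen-hostnqn                       # prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
cat /sys/class/dmi/id/product_uuid     # the <uuid> portion should match, modulo case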
00:20:32.958    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:32.958     13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:32.958      13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:32.958      13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:32.959      13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:32.959      13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:20:32.959      13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
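Each nested source of paths/export.sh prepends the same toolchain directories again, which is why the PATH echoed above repeats /opt/go, /opt/protoc and /opt/golangci several times over; harmless, but noisy. A dedup pass that keeps first occurrences, as a sketch using only plain awk/sed (no SPDK helpers):

PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
export PATH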
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:32.959  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
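The `[: : integer expression expected` complaint above is the test builtin rejecting `'[' '' -eq 1 ']'`: the flag being tested is unset, expands to an empty string, and -eq needs an integer on both sides. The usual guard is to default the variable before the numeric test; a minimal sketch (`flag` is a stand-in name, not the variable common.sh actually tests):

flag=${flag:-0}              # unset/empty becomes 0
if [ "$flag" -eq 1 ]; then   # both operands now parse as integers
    echo "flag enabled"
fi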
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:32.959    13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:20:32.959   13:46:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:20:39.518  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:20:39.518  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:20:39.518  Found net devices under 0000:d9:00.0: mlx_0_0
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:20:39.518  Found net devices under 0000:d9:00.1: mlx_0_1
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:20:39.518    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm
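rdma_device_init above loads seven kernel modules to bring up the IB/RDMA stack before any NVMe-oF traffic flows. A quick post-check that they all landed, as a sketch against standard lsmod output (modules built into the kernel will not show up there, so a "missing" hit is not always a failure):

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    lsmod | awk '{print $1}' | grep -qx "$m" || echo "missing: $m"
done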
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips
00:20:39.518   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:20:39.519  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:39.519      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:20:39.519      altname enp217s0f0np0
00:20:39.519      altname ens818f0np0
00:20:39.519      inet 192.168.100.8/24 scope global mlx_0_0
00:20:39.519         valid_lft forever preferred_lft forever
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:20:39.519  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:39.519      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:20:39.519      altname enp217s0f1np1
00:20:39.519      altname ens818f1np1
00:20:39.519      inet 192.168.100.9/24 scope global mlx_0_1
00:20:39.519         valid_lft forever preferred_lft forever
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0
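The get_ip_address calls traced above boil down to a single pipeline: `ip -o` prints one record per address, field 4 carries the CIDR, and cut strips the prefix length. Standalone, with the interface name taken from this run:

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
# field 4 of the -o output is 192.168.100.8/24 here; cut leaves 192.168.100.8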
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:39.519      13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:39.519      13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:39.519     13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:20:39.519  192.168.100.9'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:20:39.519  192.168.100.9'
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1
00:20:39.519   13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:20:39.519    13:46:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:20:39.519  192.168.100.9'
00:20:39.519    13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2
00:20:39.519    13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3340107
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3340107
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3340107 ']'
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:39.519  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:39.519   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:39.519  [2024-12-14 13:46:39.125128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:20:39.519  [2024-12-14 13:46:39.125229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:39.777  [2024-12-14 13:46:39.258725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:39.777  [2024-12-14 13:46:39.359611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:39.777  [2024-12-14 13:46:39.359658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:39.778  [2024-12-14 13:46:39.359670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:39.778  [2024-12-14 13:46:39.359683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:39.778  [2024-12-14 13:46:39.359691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:39.778  [2024-12-14 13:46:39.362090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:39.778  [2024-12-14 13:46:39.362164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:20:39.778  [2024-12-14 13:46:39.362224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:39.778  [2024-12-14 13:46:39.362236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
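nvmftestinit armed a `trap nvmftestfini` earlier, and the line above rearms it to also dump the app's shared memory, so a SIGINT/SIGTERM or failing command still tears the target down. The pattern, reduced to its core as a sketch (cleanup is a stand-in for the suite's own handlers):

cleanup() { echo 'tearing down target'; }
trap 'cleanup' SIGINT SIGTERM EXIT    # any exit path runs cleanup
# ... test body ...
trap - SIGINT SIGTERM EXIT            # disarm before the normal, explicit teardown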
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:20:40.343   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.344   13:46:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.344  [2024-12-14 13:46:40.022570] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f415830f940) succeed.
00:20:40.344  [2024-12-14 13:46:40.032046] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f41579bd940) succeed.
00:20:40.602   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.602   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:40.602   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.602   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860  Malloc0
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860  Malloc1
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860  [2024-12-14 13:46:40.461806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.860   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420
00:20:40.860  
00:20:40.860  Discovery Log Number of Records 2, Generation counter 2
00:20:40.860  =====Discovery Log Entry 0======
00:20:40.860  trtype:  rdma
00:20:40.861  adrfam:  ipv4
00:20:40.861  subtype: current discovery subsystem
00:20:40.861  treq:    not required
00:20:40.861  portid:  0
00:20:40.861  trsvcid: 4420
00:20:40.861  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:20:40.861  traddr:  192.168.100.8
00:20:40.861  eflags:  explicit discovery connections, duplicate discovery information
00:20:40.861  rdma_prtype: not specified
00:20:40.861  rdma_qptype: connected
00:20:40.861  rdma_cms:    rdma-cm
00:20:40.861  rdma_pkey: 0x0000
00:20:40.861  =====Discovery Log Entry 1======
00:20:40.861  trtype:  rdma
00:20:40.861  adrfam:  ipv4
00:20:40.861  subtype: nvme subsystem
00:20:40.861  treq:    not required
00:20:40.861  portid:  0
00:20:40.861  trsvcid: 4420
00:20:40.861  subnqn:  nqn.2016-06.io.spdk:cnode1
00:20:40.861  traddr:  192.168.100.8
00:20:40.861  eflags:  none
00:20:40.861  rdma_prtype: not specified
00:20:40.861  rdma_qptype: connected
00:20:40.861  rdma_cms:    rdma-cm
00:20:40.861  rdma_pkey: 0x0000
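The two records above are exactly what was registered: the discovery subsystem itself (entry 0) and cnode1 (entry 1), both on the 4420 listener. The same query can be reproduced with stock nvme-cli against that address (all values taken from this run):

nvme discover -t rdma -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e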
00:20:40.861   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:40.861     13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:20:40.861    13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:40.861   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:20:40.861   13:46:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:20:42.234   13:46:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
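waitforserial above loops up to 15 times, sleeping 2 s per pass, until lsblk reports the expected count of namespaces carrying the test serial; here both Malloc-backed namespaces show up on the first check after connect. The same loop as a compact sketch (wait_for_serial is a stand-in name, not the suite's helper):

wait_for_serial() {
    local serial=$1 want=${2:-1} i=0 n
    while (( i++ <= 15 )); do
        sleep 2
        n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( n == want )) && return 0
    done
    return 1
}

wait_for_serial SPDKISFASTANDAWESOME 2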
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133     13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:20:44.133  /dev/nvme0n2 ]]
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133     13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:20:44.133    13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:20:44.133   13:46:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:45.067  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:45.067  rmmod nvme_rdma
00:20:45.067  rmmod nvme_fabrics
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3340107 ']'
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3340107
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3340107 ']'
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3340107
00:20:45.067    13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:45.067    13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340107
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340107'
00:20:45.067  killing process with pid 3340107
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3340107
00:20:45.067   13:46:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3340107
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:47.597  
00:20:47.597  real	0m14.338s
00:20:47.597  user	0m29.538s
00:20:47.597  sys	0m5.749s
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:20:47.597  ************************************
00:20:47.597  END TEST nvmf_nvme_cli
00:20:47.597  ************************************
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]]
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:47.597  ************************************
00:20:47.597  START TEST nvmf_auth_target
00:20:47.597  ************************************
00:20:47.597   13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma
00:20:47.597  * Looking for test storage...
00:20:47.597  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:47.597    13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:47.597     13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:20:47.597     13:46:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:47.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:47.597  		--rc genhtml_branch_coverage=1
00:20:47.597  		--rc genhtml_function_coverage=1
00:20:47.597  		--rc genhtml_legend=1
00:20:47.597  		--rc geninfo_all_blocks=1
00:20:47.597  		--rc geninfo_unexecuted_blocks=1
00:20:47.597  		
00:20:47.597  		'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:47.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:47.597  		--rc genhtml_branch_coverage=1
00:20:47.597  		--rc genhtml_function_coverage=1
00:20:47.597  		--rc genhtml_legend=1
00:20:47.597  		--rc geninfo_all_blocks=1
00:20:47.597  		--rc geninfo_unexecuted_blocks=1
00:20:47.597  		
00:20:47.597  		'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:47.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:47.597  		--rc genhtml_branch_coverage=1
00:20:47.597  		--rc genhtml_function_coverage=1
00:20:47.597  		--rc genhtml_legend=1
00:20:47.597  		--rc geninfo_all_blocks=1
00:20:47.597  		--rc geninfo_unexecuted_blocks=1
00:20:47.597  		
00:20:47.597  		'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:47.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:47.597  		--rc genhtml_branch_coverage=1
00:20:47.597  		--rc genhtml_function_coverage=1
00:20:47.597  		--rc genhtml_legend=1
00:20:47.597  		--rc geninfo_all_blocks=1
00:20:47.597  		--rc geninfo_unexecuted_blocks=1
00:20:47.597  		
00:20:47.597  		'
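The lt 1.15 2 check above is cmp_versions splitting both version strings on ., - and : and comparing them component by component; the traced lines (scripts/common.sh@333-368) reduce to roughly this numeric-only sketch (the real helper also handles gt/le/ge and strips non-numeric suffixes via decimal):

    # strictly-less-than version compare; numeric components only
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # all components equal: not strictly less
    }

Because lcov 1.15 sorts before 2, the 1.x-style --rc lcov_* options above are the ones exported.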
00:20:47.597   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:47.597    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:47.597     13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:47.597      13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:47.597      13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:47.598      13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:47.598      13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:20:47.598      13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:47.598  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
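The "integer expression expected" message a few lines up is bash reporting that common.sh line 33 fed an empty (unset) variable to a numeric -eq test; the condition simply evaluates false and the run continues. Illustrative only (the flag name below is a placeholder, not the variable common.sh actually tests), the usual guard is to default the expansion before comparing:

    # an unset flag compared with -eq raises "integer expression expected",
    # so default it to 0 first; EXAMPLE_FLAG is a hypothetical name
    if [[ ${EXAMPLE_FLAG:-0} -eq 1 ]]; then
        echo "feature enabled"
    fi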
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:47.598    13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:20:47.598   13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:20:54.159  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:20:54.159  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:20:54.159  Found net devices under 0000:d9:00.0: mlx_0_0
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:54.159   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:20:54.160  Found net devices under 0000:d9:00.1: mlx_0_1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
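load_ib_rdma_modules above probes the InfiniBand/RDMA kernel stack one module at a time; condensed, the traced sequence is equivalent to:

    # same modules, same order as nvmf/common.sh@66-72 above
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done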
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:20:54.160  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:54.160      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:20:54.160      altname enp217s0f0np0
00:20:54.160      altname ens818f0np0
00:20:54.160      inet 192.168.100.8/24 scope global mlx_0_0
00:20:54.160         valid_lft forever preferred_lft forever
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:20:54.160  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:54.160      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:20:54.160      altname enp217s0f1np1
00:20:54.160      altname ens818f1np1
00:20:54.160      inet 192.168.100.9/24 scope global mlx_0_1
00:20:54.160         valid_lft forever preferred_lft forever
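Both ports resolve their IPv4 address through the same traced pipeline (nvmf/common.sh@116-117); as a standalone helper it amounts to:

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, with the /prefix-length stripped
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }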
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:54.160      13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:54.160      13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:54.160     13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:20:54.160  192.168.100.9'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:20:54.160  192.168.100.9'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:20:54.160  192.168.100.9'
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2
00:20:54.160    13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:20:54.160   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
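With the NICs addressed, the first and second target IPs are peeled off the newline-separated RDMA_IP_LIST exactly as traced (nvmf/common.sh@485-486):

    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9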
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3344750
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3344750
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3344750 ']'
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:54.161   13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
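nvmfappstart backgrounds nvmf_tgt (pid 3344750 above) and waitforlisten then polls the RPC socket until the app answers. A sketch of that polling loop, assuming the shape suggested by the traced max_retries=100 and PID checks:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # give up if the target died
            # rpc_get_methods succeeds once the socket is up and serving RPCs
            /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }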
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3344899
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:55.155     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f301ff5b9bf5447d56ff2c38e41b73dbac70e16d0f919134
00:20:55.155     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oNa
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f301ff5b9bf5447d56ff2c38e41b73dbac70e16d0f919134 0
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f301ff5b9bf5447d56ff2c38e41b73dbac70e16d0f919134 0
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f301ff5b9bf5447d56ff2c38e41b73dbac70e16d0f919134
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oNa
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oNa
00:20:55.155   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.oNa
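gen_dhchap_key above reads len/2 random bytes with xxd and hands the hex string to an elided python - step, ending up with a DHHC-1 secret file. A hedged sketch of the round trip, assuming the python step emits the documented NVMe in-band-auth secret representation, base64(key || CRC-32) behind a DHHC-1:<digest>: prefix; that encoding is an assumption about the elided code, not a copy of it:

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key    # digest index as traced: 0=null 1=sha256 2=sha384 3=sha512
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        # ASSUMPTION: payload is base64 of the key bytes plus a little-endian CRC-32
        python3 -c 'import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "$digest"
    }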
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:55.155     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=022a20d71089168a445f0c19c2dba54e7171428a7e12a009783fcd9199049cf9
00:20:55.155     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gsB
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 022a20d71089168a445f0c19c2dba54e7171428a7e12a009783fcd9199049cf9 3
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 022a20d71089168a445f0c19c2dba54e7171428a7e12a009783fcd9199049cf9 3
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=022a20d71089168a445f0c19c2dba54e7171428a7e12a009783fcd9199049cf9
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:55.155    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.413    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gsB
00:20:55.413    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gsB
00:20:55.413   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.gsB
00:20:55.413    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:20:55.413    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:55.414     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2889eddb0ddbd3e6cddddbf254ce1ad6
00:20:55.414     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.495
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2889eddb0ddbd3e6cddddbf254ce1ad6 1
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2889eddb0ddbd3e6cddddbf254ce1ad6 1
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2889eddb0ddbd3e6cddddbf254ce1ad6
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.495
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.495
00:20:55.414   13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.495
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:55.414     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f6734df2c039321353f9b97365a653204aecaa0120f3d3d5
00:20:55.414     13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.L66
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f6734df2c039321353f9b97365a653204aecaa0120f3d3d5 2
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f6734df2c039321353f9b97365a653204aecaa0120f3d3d5 2
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f6734df2c039321353f9b97365a653204aecaa0120f3d3d5
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:55.414    13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.L66
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.L66
00:20:55.414   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.L66
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:55.414     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b381e93f3b5cd431703f139b47bb748492d490b80b76a720
00:20:55.414     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Uv3
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b381e93f3b5cd431703f139b47bb748492d490b80b76a720 2
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b381e93f3b5cd431703f139b47bb748492d490b80b76a720 2
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b381e93f3b5cd431703f139b47bb748492d490b80b76a720
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Uv3
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Uv3
00:20:55.414   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Uv3
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:55.414     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=811569a088358cf7eb31423698df9710
00:20:55.414     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.suE
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 811569a088358cf7eb31423698df9710 1
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 811569a088358cf7eb31423698df9710 1
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=811569a088358cf7eb31423698df9710
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:55.414    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.suE
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.suE
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.suE
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:55.672     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=812fa5bba7d5b110a65715bd6702a44c61d30456a61c0fbec18938393f6c6402
00:20:55.672     13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jIm
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 812fa5bba7d5b110a65715bd6702a44c61d30456a61c0fbec18938393f6c6402 3
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 812fa5bba7d5b110a65715bd6702a44c61d30456a61c0fbec18938393f6c6402 3
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=812fa5bba7d5b110a65715bd6702a44c61d30456a61c0fbec18938393f6c6402
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jIm
00:20:55.672    13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jIm
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jIm
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3344750
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3344750 ']'
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:55.672  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:55.672   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3344899 /var/tmp/host.sock
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3344899 ']'
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:20:55.930  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:55.930   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.189   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:56.189   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
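[editor's note] waitforlisten at auth.sh@99/@100 gates the test on two RPC servers: the nvmf target at /var/tmp/spdk.sock and the host-side app at /var/tmp/host.sock. A rough bash equivalent of the wait loop; the probe via rpc_get_methods and the retry budget of 100 (mirroring max_retries above) are assumptions about the helper's internals:

    # Sketch: block until <pid> is alive and its RPC socket answers.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # process died early
            # rpc_get_methods succeeds once the server accepts connections
            if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                         # gave up after the retry budget
    }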
00:20:56.189   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:20:56.189   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.189   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oNa
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oNa
00:20:56.446   13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oNa
00:20:56.447   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.gsB ]]
00:20:56.447   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gsB
00:20:56.447   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.447   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gsB
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gsB
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.495
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.495
00:20:56.705   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.495
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.L66 ]]
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L66
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L66
00:20:56.963   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L66
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Uv3
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Uv3
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Uv3
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.suE ]]
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.suE
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.suE
00:20:57.221   13:46:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.suE
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jIm
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jIm
00:20:57.479   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jIm
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
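[editor's note] Each key file is registered twice, once per RPC server, so both sides can refer to it by keyring name (key0..key3, ckey0..ckey2); ckey3 is skipped because ckeys[3] is empty, which the [[ -n '' ]] check above just confirmed. A condensed sketch of the auth.sh@108-113 loop:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        for sock in /var/tmp/spdk.sock /var/tmp/host.sock; do
            $RPC -s "$sock" keyring_file_add_key "key$i" "${keys[i]}"
            # controller (bidirectional) keys are optional; ckeys[3] is empty
            [[ -n ${ckeys[i]} ]] && $RPC -s "$sock" keyring_file_add_key "ckey$i" "${ckeys[i]}"
        done
    done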
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:57.737   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
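[editor's note] From here the log enters a three-level matrix: for each digest, each DH group, and each key index, the host is pinned to exactly one digest/dhgroup pair via bdev_nvme_set_options, then connect_authenticate runs. A sketch of that driver loop; the full digest and dhgroup lists are assumptions, since only sha256 with null and ffdhe2048 are visible in this excerpt:

    digests=(sha256 sha384 sha512)                        # assumed full list
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Pin the host to one combination so the negotiation outcome is forced.
                $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done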
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:57.995   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
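[editor's note] connect_authenticate's core is symmetric: the target is told which key(s) to demand from this host NQN, and the host attaches a controller referencing the same keyring names. Condensed from the two calls above:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # Target side: demand key0 from this host, offer ckey0 for the reverse challenge.
    $RPC -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach over RDMA, authenticating with the same key pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0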
00:20:58.253  
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.253   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.253    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.511    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.511   13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:58.511  {
00:20:58.511  "cntlid": 1,
00:20:58.511  "qid": 0,
00:20:58.511  "state": "enabled",
00:20:58.511  "thread": "nvmf_tgt_poll_group_000",
00:20:58.511  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:58.511  "listen_address": {
00:20:58.511  "trtype": "RDMA",
00:20:58.511  "adrfam": "IPv4",
00:20:58.511  "traddr": "192.168.100.8",
00:20:58.511  "trsvcid": "4420"
00:20:58.511  },
00:20:58.511  "peer_address": {
00:20:58.511  "trtype": "RDMA",
00:20:58.511  "adrfam": "IPv4",
00:20:58.511  "traddr": "192.168.100.8",
00:20:58.511  "trsvcid": "59515"
00:20:58.511  },
00:20:58.511  "auth": {
00:20:58.511  "state": "completed",
00:20:58.511  "digest": "sha256",
00:20:58.511  "dhgroup": "null"
00:20:58.511  }
00:20:58.511  }
00:20:58.511  ]'
00:20:58.511    13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:58.511   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:58.511    13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:58.511   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:58.511    13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:58.511   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
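[editor's note] The JSON dump above is what the three jq probes pick apart: the iteration only passes if the qpair reports auth.state == completed with exactly the digest and dhgroup configured for this pass, proving authentication actually ran rather than the connection merely succeeding. Equivalent checks:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($RPC -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]      # negotiated DH group
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]   # handshake finished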
00:20:58.511   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.511   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.769   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:20:58.769   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
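[editor's note] The same secrets are then replayed through the kernel initiator. Unlike the SPDK host, nvme-cli takes the DHHC-1 strings verbatim on the command line rather than keyring names; the flags below are exactly those on the connect line above, with the secrets read back from the generated key files:

    # keys[0]/ckeys[0] are the files generated earlier; cat yields the DHHC-1 strings.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret "$(cat "${keys[0]}")" \
        --dhchap-ctrl-secret "$(cat "${ckeys[0]}")"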
00:20:59.335   13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:59.335  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
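[editor's note] Between iterations the kernel controller is disconnected and the host entry revoked, so the next digest/dhgroup/key combination negotiates from a clean slate:

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e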
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:59.335   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:59.593   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:20:59.593   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.594   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.852  
00:20:59.852    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:59.852    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:59.852    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:00.110   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.110   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:00.110  {
00:21:00.110  "cntlid": 3,
00:21:00.110  "qid": 0,
00:21:00.110  "state": "enabled",
00:21:00.110  "thread": "nvmf_tgt_poll_group_000",
00:21:00.110  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:00.110  "listen_address": {
00:21:00.110  "trtype": "RDMA",
00:21:00.110  "adrfam": "IPv4",
00:21:00.110  "traddr": "192.168.100.8",
00:21:00.110  "trsvcid": "4420"
00:21:00.110  },
00:21:00.110  "peer_address": {
00:21:00.110  "trtype": "RDMA",
00:21:00.110  "adrfam": "IPv4",
00:21:00.110  "traddr": "192.168.100.8",
00:21:00.110  "trsvcid": "37906"
00:21:00.110  },
00:21:00.110  "auth": {
00:21:00.110  "state": "completed",
00:21:00.110  "digest": "sha256",
00:21:00.110  "dhgroup": "null"
00:21:00.110  }
00:21:00.110  }
00:21:00.110  ]'
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:00.110   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:00.110   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:00.110    13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:00.369   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:00.369   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:00.369   13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:00.369   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:00.369   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:01.303  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:01.303   13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.303   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.304   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.563  
00:21:01.563    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:01.563    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:01.563    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:01.821   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.821   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:01.821  {
00:21:01.821  "cntlid": 5,
00:21:01.821  "qid": 0,
00:21:01.821  "state": "enabled",
00:21:01.821  "thread": "nvmf_tgt_poll_group_000",
00:21:01.821  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:01.821  "listen_address": {
00:21:01.821  "trtype": "RDMA",
00:21:01.821  "adrfam": "IPv4",
00:21:01.821  "traddr": "192.168.100.8",
00:21:01.821  "trsvcid": "4420"
00:21:01.821  },
00:21:01.821  "peer_address": {
00:21:01.821  "trtype": "RDMA",
00:21:01.821  "adrfam": "IPv4",
00:21:01.821  "traddr": "192.168.100.8",
00:21:01.821  "trsvcid": "47246"
00:21:01.821  },
00:21:01.821  "auth": {
00:21:01.821  "state": "completed",
00:21:01.821  "digest": "sha256",
00:21:01.821  "dhgroup": "null"
00:21:01.821  }
00:21:01.821  }
00:21:01.821  ]'
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:01.821   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:01.821    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:02.079   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:02.079    13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:02.079   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:02.079   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:02.079   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:02.337   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:02.337   13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:02.903  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:02.903   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:03.161   13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
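[editor's note] This key3 pass differs from the previous three: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at @68 contributes nothing and neither RPC above receives --dhchap-ctrlr-key. That exercises unidirectional DH-HMAC-CHAP, where the host authenticates to the controller but does not challenge it back. A sketch of how the optional argument drops out:

    # ckeys[3] is empty, so the :+ expansion yields an empty array here.
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key3 ${ckey[@]:+"${ckey[@]}"}       # ckey args appear only if set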
00:21:03.419  
00:21:03.419    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:03.419    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:03.419    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:03.677   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.677   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:03.677  {
00:21:03.677  "cntlid": 7,
00:21:03.677  "qid": 0,
00:21:03.677  "state": "enabled",
00:21:03.677  "thread": "nvmf_tgt_poll_group_000",
00:21:03.677  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:03.677  "listen_address": {
00:21:03.677  "trtype": "RDMA",
00:21:03.677  "adrfam": "IPv4",
00:21:03.677  "traddr": "192.168.100.8",
00:21:03.677  "trsvcid": "4420"
00:21:03.677  },
00:21:03.677  "peer_address": {
00:21:03.677  "trtype": "RDMA",
00:21:03.677  "adrfam": "IPv4",
00:21:03.677  "traddr": "192.168.100.8",
00:21:03.677  "trsvcid": "35970"
00:21:03.677  },
00:21:03.677  "auth": {
00:21:03.677  "state": "completed",
00:21:03.677  "digest": "sha256",
00:21:03.677  "dhgroup": "null"
00:21:03.677  }
00:21:03.677  }
00:21:03.677  ]'
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:03.677   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:03.677   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:03.677    13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:03.678   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:03.678   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:03.678   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:03.936   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:03.936   13:47:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:04.502   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:04.760  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
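[editor's note] Switching the dhgroup from null to ffdhe2048 adds an ephemeral Diffie-Hellman exchange to the CHAP handshake, so each session derives a fresh shared secret instead of relying on the static key alone; with null there is no DH step. The verification is unchanged except for the expected group:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # ...connect as before, then the negotiated group should follow suit:
    $RPC -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.dhgroup'                      # expect: ffdhe2048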
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.760   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.018  
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:05.277   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.277   13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:05.277  {
00:21:05.277  "cntlid": 9,
00:21:05.277  "qid": 0,
00:21:05.277  "state": "enabled",
00:21:05.277  "thread": "nvmf_tgt_poll_group_000",
00:21:05.277  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:05.277  "listen_address": {
00:21:05.277  "trtype": "RDMA",
00:21:05.277  "adrfam": "IPv4",
00:21:05.277  "traddr": "192.168.100.8",
00:21:05.277  "trsvcid": "4420"
00:21:05.277  },
00:21:05.277  "peer_address": {
00:21:05.277  "trtype": "RDMA",
00:21:05.277  "adrfam": "IPv4",
00:21:05.277  "traddr": "192.168.100.8",
00:21:05.277  "trsvcid": "51586"
00:21:05.277  },
00:21:05.277  "auth": {
00:21:05.277  "state": "completed",
00:21:05.277  "digest": "sha256",
00:21:05.277  "dhgroup": "ffdhe2048"
00:21:05.277  }
00:21:05.277  }
00:21:05.277  ]'
00:21:05.277    13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:05.277   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:05.535    13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:05.535   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:05.535    13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:05.535   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:05.535   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:05.535   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:05.793   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:05.793   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:06.358   13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:06.358  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:06.358   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:06.358   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.359   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.359   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.359   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:06.359   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:06.359   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.617   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.875  
00:21:06.875    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:06.875    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:06.875    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:07.133  {
00:21:07.133  "cntlid": 11,
00:21:07.133  "qid": 0,
00:21:07.133  "state": "enabled",
00:21:07.133  "thread": "nvmf_tgt_poll_group_000",
00:21:07.133  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:07.133  "listen_address": {
00:21:07.133  "trtype": "RDMA",
00:21:07.133  "adrfam": "IPv4",
00:21:07.133  "traddr": "192.168.100.8",
00:21:07.133  "trsvcid": "4420"
00:21:07.133  },
00:21:07.133  "peer_address": {
00:21:07.133  "trtype": "RDMA",
00:21:07.133  "adrfam": "IPv4",
00:21:07.133  "traddr": "192.168.100.8",
00:21:07.133  "trsvcid": "55796"
00:21:07.133  },
00:21:07.133  "auth": {
00:21:07.133  "state": "completed",
00:21:07.133  "digest": "sha256",
00:21:07.133  "dhgroup": "ffdhe2048"
00:21:07.133  }
00:21:07.133  }
00:21:07.133  ]'
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:07.133    13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:07.133   13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:07.392   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:07.392   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:07.957   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:08.216  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:08.216   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.474   13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.474  
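[annotation] connect_authenticate then registers the host NQN on the target with the key pair under test and attaches a controller from the host using the same keys; if DH-HMAC-CHAP fails, the attach (and the [[ nvme0 == nvme0 ]] check that follows) fails with it. A condensed sketch of the two sides; rpc_cmd is presumably a wrapper around scripts/rpc.py aimed at the target application's default socket, and the long NQN/UUID arguments are elided:

    # Target side: authorize the host NQN for cnode0 with key2/ckey2
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach a controller over RDMA with the matching key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q <host-nqn> \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2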
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:08.732   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.732   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:08.732  {
00:21:08.732  "cntlid": 13,
00:21:08.732  "qid": 0,
00:21:08.732  "state": "enabled",
00:21:08.732  "thread": "nvmf_tgt_poll_group_000",
00:21:08.732  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:08.732  "listen_address": {
00:21:08.732  "trtype": "RDMA",
00:21:08.732  "adrfam": "IPv4",
00:21:08.732  "traddr": "192.168.100.8",
00:21:08.732  "trsvcid": "4420"
00:21:08.732  },
00:21:08.732  "peer_address": {
00:21:08.732  "trtype": "RDMA",
00:21:08.732  "adrfam": "IPv4",
00:21:08.732  "traddr": "192.168.100.8",
00:21:08.732  "trsvcid": "55399"
00:21:08.732  },
00:21:08.732  "auth": {
00:21:08.732  "state": "completed",
00:21:08.732  "digest": "sha256",
00:21:08.732  "dhgroup": "ffdhe2048"
00:21:08.732  }
00:21:08.732  }
00:21:08.732  ]'
00:21:08.732    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:08.991   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:08.991    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:08.991   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:08.991    13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:08.991   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
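[annotation] Verification reads the negotiated parameters back from the target: nvmf_subsystem_get_qpairs reports the admin qpair, whose auth block must echo the configured digest and dhgroup and show state "completed". A minimal sketch of the checks traced above, assuming $qpairs holds the JSON dump:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe2048
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed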
00:21:08.991   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:08.991   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:09.249   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:09.249   13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
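[annotation] After the bdev path is detached, the same credentials are exercised through the kernel initiator: nvme connect takes the host key as --dhchap-secret and the controller key as --dhchap-ctrl-secret (note the ctrl/ctrlr spelling difference between nvme-cli and the SPDK RPCs). Shape of the call from the trace, with host identifiers and secrets abbreviated:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q <host-nqn> --hostid <host-id> -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'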
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:09.815  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:09.815   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.073   13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.332  
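[annotation] This round uses key3, and the trace shows no --dhchap-ctrlr-key on either side: ckeys[3] is empty, so the expansion at auth.sh@68 produces no arguments and the session presumably authenticates the host only, with no bidirectional (controller) challenge. The mechanism is plain bash, verbatim from the trace:

    # Expands to nothing when ckeys[$3] is unset or empty
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})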
00:21:10.332    13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:10.332    13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:10.332    13:47:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.590   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.590   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:10.590  {
00:21:10.590  "cntlid": 15,
00:21:10.590  "qid": 0,
00:21:10.590  "state": "enabled",
00:21:10.590  "thread": "nvmf_tgt_poll_group_000",
00:21:10.590  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:10.590  "listen_address": {
00:21:10.590  "trtype": "RDMA",
00:21:10.590  "adrfam": "IPv4",
00:21:10.590  "traddr": "192.168.100.8",
00:21:10.590  "trsvcid": "4420"
00:21:10.590  },
00:21:10.590  "peer_address": {
00:21:10.590  "trtype": "RDMA",
00:21:10.590  "adrfam": "IPv4",
00:21:10.590  "traddr": "192.168.100.8",
00:21:10.590  "trsvcid": "58888"
00:21:10.590  },
00:21:10.590  "auth": {
00:21:10.590  "state": "completed",
00:21:10.590  "digest": "sha256",
00:21:10.590  "dhgroup": "ffdhe2048"
00:21:10.590  }
00:21:10.590  }
00:21:10.590  ]'
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:10.590   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:10.590   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:10.590    13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:10.850   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:10.850   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:10.850   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:10.850   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:10.850   13:47:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:11.783  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
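[annotation] Here the outer loop advances: the for lines at auth.sh@119-120 show the suite iterating every configured DH group (moving from ffdhe2048 to ffdhe3072) and, inside it, keys 0 through 3. A plausible skeleton inferred from the trace; the actual arrays live in target/auth.sh, and the fixed sha256 digest is presumably supplied by an enclosing digest loop:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"                       # auth.sh@121
            connect_authenticate sha256 "$dhgroup" "$keyid"        # auth.sh@123
        done
    done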
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:11.783   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.784   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.042  
00:21:12.042    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:12.042    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:12.042    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:12.299   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.300    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:12.300    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.300    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.300    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.300   13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:12.300  {
00:21:12.300  "cntlid": 17,
00:21:12.300  "qid": 0,
00:21:12.300  "state": "enabled",
00:21:12.300  "thread": "nvmf_tgt_poll_group_000",
00:21:12.300  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:12.300  "listen_address": {
00:21:12.300  "trtype": "RDMA",
00:21:12.300  "adrfam": "IPv4",
00:21:12.300  "traddr": "192.168.100.8",
00:21:12.300  "trsvcid": "4420"
00:21:12.300  },
00:21:12.300  "peer_address": {
00:21:12.300  "trtype": "RDMA",
00:21:12.300  "adrfam": "IPv4",
00:21:12.300  "traddr": "192.168.100.8",
00:21:12.300  "trsvcid": "55041"
00:21:12.300  },
00:21:12.300  "auth": {
00:21:12.300  "state": "completed",
00:21:12.300  "digest": "sha256",
00:21:12.300  "dhgroup": "ffdhe3072"
00:21:12.300  }
00:21:12.300  }
00:21:12.300  ]'
00:21:12.300    13:47:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:12.300   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:12.300    13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:12.557   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:12.557    13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:12.557   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:12.557   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:12.557   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:12.816   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:12.816   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:13.385   13:47:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:13.385  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.385   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:13.644   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.645   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.904  
00:21:13.904    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:13.904    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:13.904    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:14.163   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.163   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:14.163  {
00:21:14.163  "cntlid": 19,
00:21:14.163  "qid": 0,
00:21:14.163  "state": "enabled",
00:21:14.163  "thread": "nvmf_tgt_poll_group_000",
00:21:14.163  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:14.163  "listen_address": {
00:21:14.163  "trtype": "RDMA",
00:21:14.163  "adrfam": "IPv4",
00:21:14.163  "traddr": "192.168.100.8",
00:21:14.163  "trsvcid": "4420"
00:21:14.163  },
00:21:14.163  "peer_address": {
00:21:14.163  "trtype": "RDMA",
00:21:14.163  "adrfam": "IPv4",
00:21:14.163  "traddr": "192.168.100.8",
00:21:14.163  "trsvcid": "36200"
00:21:14.163  },
00:21:14.163  "auth": {
00:21:14.163  "state": "completed",
00:21:14.163  "digest": "sha256",
00:21:14.163  "dhgroup": "ffdhe3072"
00:21:14.163  }
00:21:14.163  }
00:21:14.163  ]'
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:14.163   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:14.163    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:14.163   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:14.164    13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:14.164   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:14.164   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:14.164   13:47:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:14.422   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:14.422   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:14.990   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.249  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:15.249   13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.511   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.772  
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:15.772   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.772    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.031    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:16.031  {
00:21:16.031  "cntlid": 21,
00:21:16.031  "qid": 0,
00:21:16.031  "state": "enabled",
00:21:16.031  "thread": "nvmf_tgt_poll_group_000",
00:21:16.031  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:16.031  "listen_address": {
00:21:16.031  "trtype": "RDMA",
00:21:16.031  "adrfam": "IPv4",
00:21:16.031  "traddr": "192.168.100.8",
00:21:16.031  "trsvcid": "4420"
00:21:16.031  },
00:21:16.031  "peer_address": {
00:21:16.031  "trtype": "RDMA",
00:21:16.031  "adrfam": "IPv4",
00:21:16.031  "traddr": "192.168.100.8",
00:21:16.031  "trsvcid": "57792"
00:21:16.031  },
00:21:16.031  "auth": {
00:21:16.031  "state": "completed",
00:21:16.031  "digest": "sha256",
00:21:16.031  "dhgroup": "ffdhe3072"
00:21:16.031  }
00:21:16.031  }
00:21:16.031  ]'
00:21:16.031    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:16.031    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:16.031    13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:16.031   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.290   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:16.290   13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:16.858  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:16.858   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:17.117   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:17.118   13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:17.376  
00:21:17.376    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:17.376    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:17.376    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.636   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.636   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.636  {
00:21:17.636  "cntlid": 23,
00:21:17.636  "qid": 0,
00:21:17.636  "state": "enabled",
00:21:17.636  "thread": "nvmf_tgt_poll_group_000",
00:21:17.636  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:17.636  "listen_address": {
00:21:17.636  "trtype": "RDMA",
00:21:17.636  "adrfam": "IPv4",
00:21:17.636  "traddr": "192.168.100.8",
00:21:17.636  "trsvcid": "4420"
00:21:17.636  },
00:21:17.636  "peer_address": {
00:21:17.636  "trtype": "RDMA",
00:21:17.636  "adrfam": "IPv4",
00:21:17.636  "traddr": "192.168.100.8",
00:21:17.636  "trsvcid": "57395"
00:21:17.636  },
00:21:17.636  "auth": {
00:21:17.636  "state": "completed",
00:21:17.636  "digest": "sha256",
00:21:17.636  "dhgroup": "ffdhe3072"
00:21:17.636  }
00:21:17.636  }
00:21:17.636  ]'
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:17.636   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:17.636   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:17.636    13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:17.895   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:17.895   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:17.895   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:17.895   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:17.895   13:47:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.833  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
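[annotation] The pattern now repeats unchanged for ffdhe4096; the only values that move between rounds are the DH group, the ephemeral peer port, and the controller ID, which in this trace the target hands out in increments of two (13, 15, 17, 19, 21, 23 in the rounds above) as each round allocates a fresh controller. A quick way to pull the ID from a captured dump, assuming $qpairs holds the JSON shown in each round:

    jq -r '.[0].cntlid' <<< "$qpairs"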
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.833   13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:19.092  
00:21:19.092    13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:19.092    13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.092    13:47:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:19.351   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.351   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.351  {
00:21:19.351  "cntlid": 25,
00:21:19.351  "qid": 0,
00:21:19.351  "state": "enabled",
00:21:19.351  "thread": "nvmf_tgt_poll_group_000",
00:21:19.351  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:19.351  "listen_address": {
00:21:19.351  "trtype": "RDMA",
00:21:19.351  "adrfam": "IPv4",
00:21:19.351  "traddr": "192.168.100.8",
00:21:19.351  "trsvcid": "4420"
00:21:19.351  },
00:21:19.351  "peer_address": {
00:21:19.351  "trtype": "RDMA",
00:21:19.351  "adrfam": "IPv4",
00:21:19.351  "traddr": "192.168.100.8",
00:21:19.351  "trsvcid": "56736"
00:21:19.351  },
00:21:19.351  "auth": {
00:21:19.351  "state": "completed",
00:21:19.351  "digest": "sha256",
00:21:19.351  "dhgroup": "ffdhe4096"
00:21:19.351  }
00:21:19.351  }
00:21:19.351  ]'
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:19.351   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:19.351    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:19.611   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:19.611    13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.611   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:19.611   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:19.611   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:19.870   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:19.870   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:20.438   13:47:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:20.438  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:20.438   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:20.439   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.698   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.971  
00:21:20.971    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:20.971    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:20.971    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:21.230  {
00:21:21.230  "cntlid": 27,
00:21:21.230  "qid": 0,
00:21:21.230  "state": "enabled",
00:21:21.230  "thread": "nvmf_tgt_poll_group_000",
00:21:21.230  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:21.230  "listen_address": {
00:21:21.230  "trtype": "RDMA",
00:21:21.230  "adrfam": "IPv4",
00:21:21.230  "traddr": "192.168.100.8",
00:21:21.230  "trsvcid": "4420"
00:21:21.230  },
00:21:21.230  "peer_address": {
00:21:21.230  "trtype": "RDMA",
00:21:21.230  "adrfam": "IPv4",
00:21:21.230  "traddr": "192.168.100.8",
00:21:21.230  "trsvcid": "58722"
00:21:21.230  },
00:21:21.230  "auth": {
00:21:21.230  "state": "completed",
00:21:21.230  "digest": "sha256",
00:21:21.230  "dhgroup": "ffdhe4096"
00:21:21.230  }
00:21:21.230  }
00:21:21.230  ]'
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:21.230    13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:21.230   13:47:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.490   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:21.490   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:22.141   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:22.414  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
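After detaching the SPDK bdev, each pass repeats the handshake with the kernel initiator (auth.sh@36): nvme-cli receives the same DHHC-1 secrets on the command line, -i 1 keeps the session to a single I/O queue, and -l 0 sets ctrl-loss-tmo to zero so a failed authentication surfaces immediately rather than being retried. A sketch with placeholder secrets (the real values are in the log lines above); no -s is passed, so the default RDMA trsvcid 4420 applies:

    nvme connect -t rdma -a 192.168.100.8 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e \
        -i 1 -l 0 \
        --dhchap-secret      'DHHC-1:01:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0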
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:22.414   13:47:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.414   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.681  
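The lines above are the setup half of connect_authenticate (auth.sh@65-71): the host NQN is registered on the target with the key pair under test, then the host app attaches a controller, which triggers the DH-HMAC-CHAP exchange during CONNECT. hostrpc (auth.sh@31) is nothing more than rpc.py pointed at the host application's socket; key2/ckey2 are keyring names registered earlier in the run. A sketch, assuming $rpc and $hostnqn are set as in the earlier sketch:

    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # second SPDK app acting as host

    # target side: allow this host, binding host key + controller (bidirectional) key
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach; authentication must complete before the controller is usable
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2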
00:21:22.681    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:22.681    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:22.681    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:22.940   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.940   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:22.940  {
00:21:22.940  "cntlid": 29,
00:21:22.940  "qid": 0,
00:21:22.940  "state": "enabled",
00:21:22.940  "thread": "nvmf_tgt_poll_group_000",
00:21:22.940  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:22.940  "listen_address": {
00:21:22.940  "trtype": "RDMA",
00:21:22.940  "adrfam": "IPv4",
00:21:22.940  "traddr": "192.168.100.8",
00:21:22.940  "trsvcid": "4420"
00:21:22.940  },
00:21:22.940  "peer_address": {
00:21:22.940  "trtype": "RDMA",
00:21:22.940  "adrfam": "IPv4",
00:21:22.940  "traddr": "192.168.100.8",
00:21:22.940  "trsvcid": "39673"
00:21:22.940  },
00:21:22.940  "auth": {
00:21:22.940  "state": "completed",
00:21:22.940  "digest": "sha256",
00:21:22.940  "dhgroup": "ffdhe4096"
00:21:22.940  }
00:21:22.940  }
00:21:22.940  ]'
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:22.940   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:22.940   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:22.940    13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:23.200   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:23.200   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:23.200   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:23.200   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:23.200   13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:24.137  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:24.137   13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:24.397  
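Note that the key3 pass above carries no --dhchap-ctrlr-key on either add_host or the attach: judging by the trace, ckeys[3] is empty, so the :+ expansion at auth.sh@68 produces nothing and this iteration exercises unidirectional authentication only (the host proves itself to the controller, not the reverse). The mechanism in isolation, with the empty entry assumed:

    ckeys[3]=""                                      # assumed: no controller key for key3
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # :+ yields nothing when empty
    echo "${#ckey[@]}"                               # prints 0 -> the flags are simply omitted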
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:24.656   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.656   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:24.656  {
00:21:24.656  "cntlid": 31,
00:21:24.656  "qid": 0,
00:21:24.656  "state": "enabled",
00:21:24.656  "thread": "nvmf_tgt_poll_group_000",
00:21:24.656  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:24.656  "listen_address": {
00:21:24.656  "trtype": "RDMA",
00:21:24.656  "adrfam": "IPv4",
00:21:24.656  "traddr": "192.168.100.8",
00:21:24.656  "trsvcid": "4420"
00:21:24.656  },
00:21:24.656  "peer_address": {
00:21:24.656  "trtype": "RDMA",
00:21:24.656  "adrfam": "IPv4",
00:21:24.656  "traddr": "192.168.100.8",
00:21:24.656  "trsvcid": "55192"
00:21:24.656  },
00:21:24.656  "auth": {
00:21:24.656  "state": "completed",
00:21:24.656  "digest": "sha256",
00:21:24.656  "dhgroup": "ffdhe4096"
00:21:24.656  }
00:21:24.656  }
00:21:24.656  ]'
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:24.656   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:24.656    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:24.915   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:24.915    13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:24.915   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:24.915   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:24.915   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:25.174   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:25.174   13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.742  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
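The secrets themselves follow the DHHC-1 representation from the NVMe in-band authentication spec: DHHC-1:hh:<base64>:, where the hh field appears to name the HMAC the key was sized for (00 = unhashed/opaque, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key followed by a 4-byte CRC-32. Decoding the key3 secret from this run should give 68 bytes, consistent with a 64-byte SHA-512 key plus CRC:

    # take the base64 field of the DHHC-1 string and count decoded bytes
    secret='DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:'
    printf '%s' "$secret" | cut -d: -f3 | base64 -d | wc -c   # -> 68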
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:25.742   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
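Before every pass the host app's negotiable parameters are pinned to exactly one digest and one DH group (auth.sh@121), so the values asserted afterwards can only be the pair under test; at this point the run has moved on from ffdhe4096 to ffdhe6144. In isolation:

    # restrict what the host-side initiator may negotiate from here on
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144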
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.000   13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.259  
00:21:26.259    13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:26.259    13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.259    13:47:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:26.518   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.518   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:26.518  {
00:21:26.518  "cntlid": 33,
00:21:26.518  "qid": 0,
00:21:26.518  "state": "enabled",
00:21:26.518  "thread": "nvmf_tgt_poll_group_000",
00:21:26.518  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:26.518  "listen_address": {
00:21:26.518  "trtype": "RDMA",
00:21:26.518  "adrfam": "IPv4",
00:21:26.518  "traddr": "192.168.100.8",
00:21:26.518  "trsvcid": "4420"
00:21:26.518  },
00:21:26.518  "peer_address": {
00:21:26.518  "trtype": "RDMA",
00:21:26.518  "adrfam": "IPv4",
00:21:26.518  "traddr": "192.168.100.8",
00:21:26.518  "trsvcid": "48650"
00:21:26.518  },
00:21:26.518  "auth": {
00:21:26.518  "state": "completed",
00:21:26.518  "digest": "sha256",
00:21:26.518  "dhgroup": "ffdhe6144"
00:21:26.518  }
00:21:26.518  }
00:21:26.518  ]'
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:26.518   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:26.518   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:26.518    13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:26.778   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:26.778   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:26.778   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:26.778   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:26.778   13:47:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:27.716  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:27.716   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.284  
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:28.284   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:28.284    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.285    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:28.285   13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:28.285  {
00:21:28.285  "cntlid": 35,
00:21:28.285  "qid": 0,
00:21:28.285  "state": "enabled",
00:21:28.285  "thread": "nvmf_tgt_poll_group_000",
00:21:28.285  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:28.285  "listen_address": {
00:21:28.285  "trtype": "RDMA",
00:21:28.285  "adrfam": "IPv4",
00:21:28.285  "traddr": "192.168.100.8",
00:21:28.285  "trsvcid": "4420"
00:21:28.285  },
00:21:28.285  "peer_address": {
00:21:28.285  "trtype": "RDMA",
00:21:28.285  "adrfam": "IPv4",
00:21:28.285  "traddr": "192.168.100.8",
00:21:28.285  "trsvcid": "54433"
00:21:28.285  },
00:21:28.285  "auth": {
00:21:28.285  "state": "completed",
00:21:28.285  "digest": "sha256",
00:21:28.285  "dhgroup": "ffdhe6144"
00:21:28.285  }
00:21:28.285  }
00:21:28.285  ]'
00:21:28.285    13:47:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:28.544   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:28.544    13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:28.544   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:28.544    13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:28.544   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:28.544   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:28.544   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:28.802   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:28.803   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:29.371   13:47:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:29.371  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
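Per-iteration teardown as traced above (auth.sh@82-83): the kernel session is dropped and the host NQN is de-registered from the subsystem, so the next pass can re-add it bound to a different key pair. rpc_cmd here talks to the target application directly; $rpc and $hostnqn as in the earlier sketches:

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"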
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:29.371   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.630   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:29.889  
00:21:29.889    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:29.890    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:29.890    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:30.148   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:30.148    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:30.148    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:30.148    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.148    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:30.148   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:30.148  {
00:21:30.148  "cntlid": 37,
00:21:30.148  "qid": 0,
00:21:30.148  "state": "enabled",
00:21:30.148  "thread": "nvmf_tgt_poll_group_000",
00:21:30.149  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:30.149  "listen_address": {
00:21:30.149  "trtype": "RDMA",
00:21:30.149  "adrfam": "IPv4",
00:21:30.149  "traddr": "192.168.100.8",
00:21:30.149  "trsvcid": "4420"
00:21:30.149  },
00:21:30.149  "peer_address": {
00:21:30.149  "trtype": "RDMA",
00:21:30.149  "adrfam": "IPv4",
00:21:30.149  "traddr": "192.168.100.8",
00:21:30.149  "trsvcid": "47587"
00:21:30.149  },
00:21:30.149  "auth": {
00:21:30.149  "state": "completed",
00:21:30.149  "digest": "sha256",
00:21:30.149  "dhgroup": "ffdhe6144"
00:21:30.149  }
00:21:30.149  }
00:21:30.149  ]'
00:21:30.149    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:30.149   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:30.149    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:30.408   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:30.408    13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:30.408   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:30.408   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:30.408   13:47:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:30.667   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:30.667   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:31.236  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:31.236   13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:31.496   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:31.756  
00:21:31.756    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:31.756    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:31.756    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:32.015   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:32.015   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:32.015  {
00:21:32.015  "cntlid": 39,
00:21:32.015  "qid": 0,
00:21:32.015  "state": "enabled",
00:21:32.015  "thread": "nvmf_tgt_poll_group_000",
00:21:32.015  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:32.015  "listen_address": {
00:21:32.015  "trtype": "RDMA",
00:21:32.015  "adrfam": "IPv4",
00:21:32.015  "traddr": "192.168.100.8",
00:21:32.015  "trsvcid": "4420"
00:21:32.015  },
00:21:32.015  "peer_address": {
00:21:32.015  "trtype": "RDMA",
00:21:32.015  "adrfam": "IPv4",
00:21:32.015  "traddr": "192.168.100.8",
00:21:32.015  "trsvcid": "54392"
00:21:32.015  },
00:21:32.015  "auth": {
00:21:32.015  "state": "completed",
00:21:32.015  "digest": "sha256",
00:21:32.015  "dhgroup": "ffdhe6144"
00:21:32.015  }
00:21:32.015  }
00:21:32.015  ]'
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:32.015   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:32.015   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:32.015    13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:32.274   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:32.275   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:32.275   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:32.275   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:32.275   13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:32.843   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:33.103  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:33.103   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
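The @119/@120 frames that keep reappearing are the loops driving this entire section; by this point the outer loop has reached ffdhe8192. Reconstructed roughly from the trace (the array contents are set earlier in auth.sh, and an enclosing loop over digests presumably exists above these frames, since only sha256 appears in this span):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@123
        done
    done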
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.363   13:47:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.622  
00:21:33.623    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:33.623    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:33.623    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:33.882   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.882   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:33.882  {
00:21:33.882  "cntlid": 41,
00:21:33.882  "qid": 0,
00:21:33.882  "state": "enabled",
00:21:33.882  "thread": "nvmf_tgt_poll_group_000",
00:21:33.882  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:33.882  "listen_address": {
00:21:33.882  "trtype": "RDMA",
00:21:33.882  "adrfam": "IPv4",
00:21:33.882  "traddr": "192.168.100.8",
00:21:33.882  "trsvcid": "4420"
00:21:33.882  },
00:21:33.882  "peer_address": {
00:21:33.882  "trtype": "RDMA",
00:21:33.882  "adrfam": "IPv4",
00:21:33.882  "traddr": "192.168.100.8",
00:21:33.882  "trsvcid": "52477"
00:21:33.882  },
00:21:33.882  "auth": {
00:21:33.882  "state": "completed",
00:21:33.882  "digest": "sha256",
00:21:33.882  "dhgroup": "ffdhe8192"
00:21:33.882  }
00:21:33.882  }
00:21:33.882  ]'
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:33.882   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:33.882    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:34.142   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:34.142    13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:34.142   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:34.142   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:34.142   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:34.401   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:34.401   13:47:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:34.970  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:34.970   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:35.230   13:47:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:35.798  
00:21:35.798    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:35.798    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:35.798    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:35.798   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:35.798    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:35.799    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.799    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.799    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:36.057    {
00:21:36.057      "cntlid": 43,
00:21:36.057      "qid": 0,
00:21:36.057      "state": "enabled",
00:21:36.057      "thread": "nvmf_tgt_poll_group_000",
00:21:36.057      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:36.057      "listen_address": {
00:21:36.057        "trtype": "RDMA",
00:21:36.057        "adrfam": "IPv4",
00:21:36.057        "traddr": "192.168.100.8",
00:21:36.057        "trsvcid": "4420"
00:21:36.057      },
00:21:36.057      "peer_address": {
00:21:36.057        "trtype": "RDMA",
00:21:36.057        "adrfam": "IPv4",
00:21:36.057        "traddr": "192.168.100.8",
00:21:36.057        "trsvcid": "47248"
00:21:36.057      },
00:21:36.057      "auth": {
00:21:36.057        "state": "completed",
00:21:36.057        "digest": "sha256",
00:21:36.057        "dhgroup": "ffdhe8192"
00:21:36.057      }
00:21:36.057    }
00:21:36.057  ]'
00:21:36.057    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:36.057    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:36.057    13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:36.057   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:36.316   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:36.317   13:47:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:36.885   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:36.885  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:36.885   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:36.885   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.885   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.885   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
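
Each connect_authenticate iteration in this trace follows the same pattern. A minimal sketch of that flow, reconstructed from the auth.sh markers visible above (rpc_cmd and hostrpc are the script's own helpers; the exact function body is an approximation, not the verbatim source):

# Sketch, assuming the hostnqn seen in the trace; keyN/ckeyN name keys
# loaded earlier in the script.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # target side: allow this host NQN to authenticate with keyN
    # (plus ckeyN when a controller key exists, for bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # host side: attach an NVMe-oF controller over RDMA using the same key
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # verify the qpair finished authentication with the expected parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
    hostrpc bdev_nvme_detach_controller nvme0
}
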
00:21:36.886   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:36.886   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:36.886   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:37.152   13:47:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:37.721  
00:21:37.721    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:37.721    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:37.721    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:37.981   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.981   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:37.981    {
00:21:37.981      "cntlid": 45,
00:21:37.981      "qid": 0,
00:21:37.981      "state": "enabled",
00:21:37.981      "thread": "nvmf_tgt_poll_group_000",
00:21:37.981      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:37.981      "listen_address": {
00:21:37.981        "trtype": "RDMA",
00:21:37.981        "adrfam": "IPv4",
00:21:37.981        "traddr": "192.168.100.8",
00:21:37.981        "trsvcid": "4420"
00:21:37.981      },
00:21:37.981      "peer_address": {
00:21:37.981        "trtype": "RDMA",
00:21:37.981        "adrfam": "IPv4",
00:21:37.981        "traddr": "192.168.100.8",
00:21:37.981        "trsvcid": "38705"
00:21:37.981      },
00:21:37.981      "auth": {
00:21:37.981        "state": "completed",
00:21:37.981        "digest": "sha256",
00:21:37.981        "dhgroup": "ffdhe8192"
00:21:37.981      }
00:21:37.981    }
00:21:37.981  ]'
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:37.981   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:37.981    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:37.982   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:37.982    13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:37.982   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:37.982   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:37.982   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:38.241   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:38.241   13:47:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:38.810   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:39.070  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
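
Each iteration also exercises the kernel initiator through the nvme_connect helper traced at auth.sh@36. A hedged sketch of that leg, with the DHHC-1 secrets passed straight to nvme-cli (the keys/ckeys arrays and the hostid derivation are assumptions inferred from the values in the trace, not the helper's verbatim body):

# Approximation of the nvme_connect/disconnect leg: the kernel initiator
# re-authenticates with the same secrets, then tears the connection down.
nvme_connect() {
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*uuid:}" -l 0 "$@"
}
nvme_connect --dhchap-secret "${keys[$keyid]}" \
    ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
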
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:39.070   13:47:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:39.639  
00:21:39.639    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:39.639    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:39.639    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:39.898   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:39.898   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:39.898    {
00:21:39.898      "cntlid": 47,
00:21:39.898      "qid": 0,
00:21:39.898      "state": "enabled",
00:21:39.898      "thread": "nvmf_tgt_poll_group_000",
00:21:39.898      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:39.898      "listen_address": {
00:21:39.898        "trtype": "RDMA",
00:21:39.898        "adrfam": "IPv4",
00:21:39.898        "traddr": "192.168.100.8",
00:21:39.898        "trsvcid": "4420"
00:21:39.898      },
00:21:39.898      "peer_address": {
00:21:39.898        "trtype": "RDMA",
00:21:39.898        "adrfam": "IPv4",
00:21:39.898        "traddr": "192.168.100.8",
00:21:39.898        "trsvcid": "55769"
00:21:39.898      },
00:21:39.898      "auth": {
00:21:39.898        "state": "completed",
00:21:39.898        "digest": "sha256",
00:21:39.898        "dhgroup": "ffdhe8192"
00:21:39.898      }
00:21:39.898    }
00:21:39.898  ]'
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:39.898   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:39.898   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:39.898    13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:40.158   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:40.158   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:40.158   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:40.158   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:40.158   13:47:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:40.726   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:40.986  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
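
At this point the trace advances to the next digest/dhgroup combination (sha384 with the null group). The nested loop implied by the auth.sh@118-121 markers looks roughly like this; the digests, dhgroups, and keys arrays are defined earlier in the script and their contents here are an assumption:

# Approximate loop structure driving the iterations in this trace.
for digest in "${digests[@]}"; do           # e.g. sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do     # e.g. null, ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do      # key0 .. key3
            # pin the host to a single digest/dhgroup so the DH-CHAP
            # handshake must negotiate exactly this combination
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
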
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:40.986   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:41.245   13:47:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:41.505  
00:21:41.505    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:41.505    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:41.505    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:41.764   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:41.764    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:41.764    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.764    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.764    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.764   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:41.764    {
00:21:41.764      "cntlid": 49,
00:21:41.764      "qid": 0,
00:21:41.764      "state": "enabled",
00:21:41.764      "thread": "nvmf_tgt_poll_group_000",
00:21:41.764      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:41.764      "listen_address": {
00:21:41.764        "trtype": "RDMA",
00:21:41.764        "adrfam": "IPv4",
00:21:41.764        "traddr": "192.168.100.8",
00:21:41.764        "trsvcid": "4420"
00:21:41.764      },
00:21:41.764      "peer_address": {
00:21:41.764        "trtype": "RDMA",
00:21:41.764        "adrfam": "IPv4",
00:21:41.764        "traddr": "192.168.100.8",
00:21:41.765        "trsvcid": "46549"
00:21:41.765      },
00:21:41.765      "auth": {
00:21:41.765        "state": "completed",
00:21:41.765        "digest": "sha384",
00:21:41.765        "dhgroup": "null"
00:21:41.765      }
00:21:41.765    }
00:21:41.765  ]'
00:21:41.765    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:41.765   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:41.765    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:41.765   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:41.765    13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:41.765   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:41.765   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:41.765   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:42.024   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:42.024   13:47:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:42.596  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:42.596   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.856   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:43.115  
00:21:43.115    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:43.115    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:43.115    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:43.374   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:43.374    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:43.374    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.374    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.374    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.374   13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:43.374    {
00:21:43.374      "cntlid": 51,
00:21:43.374      "qid": 0,
00:21:43.374      "state": "enabled",
00:21:43.374      "thread": "nvmf_tgt_poll_group_000",
00:21:43.374      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:43.374      "listen_address": {
00:21:43.374        "trtype": "RDMA",
00:21:43.374        "adrfam": "IPv4",
00:21:43.374        "traddr": "192.168.100.8",
00:21:43.374        "trsvcid": "4420"
00:21:43.374      },
00:21:43.374      "peer_address": {
00:21:43.374        "trtype": "RDMA",
00:21:43.374        "adrfam": "IPv4",
00:21:43.374        "traddr": "192.168.100.8",
00:21:43.374        "trsvcid": "55094"
00:21:43.374      },
00:21:43.374      "auth": {
00:21:43.374        "state": "completed",
00:21:43.374        "digest": "sha384",
00:21:43.374        "dhgroup": "null"
00:21:43.374      }
00:21:43.374    }
00:21:43.374  ]'
00:21:43.374    13:47:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:43.374   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:43.374    13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:43.374   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:43.374    13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:43.375   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:43.375   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:43.375   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:43.634   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:43.634   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:44.291   13:47:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:44.581  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.581   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.840  
00:21:44.840    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:44.840    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:44.840    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.099   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.099   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:45.099    {
00:21:45.099      "cntlid": 53,
00:21:45.099      "qid": 0,
00:21:45.099      "state": "enabled",
00:21:45.099      "thread": "nvmf_tgt_poll_group_000",
00:21:45.099      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:45.099      "listen_address": {
00:21:45.099        "trtype": "RDMA",
00:21:45.099        "adrfam": "IPv4",
00:21:45.099        "traddr": "192.168.100.8",
00:21:45.099        "trsvcid": "4420"
00:21:45.099      },
00:21:45.099      "peer_address": {
00:21:45.099        "trtype": "RDMA",
00:21:45.099        "adrfam": "IPv4",
00:21:45.099        "traddr": "192.168.100.8",
00:21:45.099        "trsvcid": "41547"
00:21:45.099      },
00:21:45.099      "auth": {
00:21:45.099        "state": "completed",
00:21:45.099        "digest": "sha384",
00:21:45.099        "dhgroup": "null"
00:21:45.099      }
00:21:45.099    }
00:21:45.099  ]'
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:45.099   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:45.099   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:45.099    13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:45.358   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:45.358   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:45.358   13:47:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:45.358   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:45.358   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:45.925   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:46.184  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:46.184   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:46.443   13:47:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:46.703  
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:46.703   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.703    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.962    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:46.962    {
00:21:46.962      "cntlid": 55,
00:21:46.962      "qid": 0,
00:21:46.962      "state": "enabled",
00:21:46.962      "thread": "nvmf_tgt_poll_group_000",
00:21:46.962      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:46.962      "listen_address": {
00:21:46.962        "trtype": "RDMA",
00:21:46.962        "adrfam": "IPv4",
00:21:46.962        "traddr": "192.168.100.8",
00:21:46.962        "trsvcid": "4420"
00:21:46.962      },
00:21:46.962      "peer_address": {
00:21:46.962        "trtype": "RDMA",
00:21:46.962        "adrfam": "IPv4",
00:21:46.962        "traddr": "192.168.100.8",
00:21:46.962        "trsvcid": "55674"
00:21:46.962      },
00:21:46.962      "auth": {
00:21:46.962        "state": "completed",
00:21:46.962        "digest": "sha384",
00:21:46.962        "dhgroup": "null"
00:21:46.962      }
00:21:46.962    }
00:21:46.962  ]'
00:21:46.962    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:46.962    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:46.962    13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:46.962   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:47.222   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:47.222   13:47:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:47.791  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:47.791   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:48.051   13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:48.310  
00:21:48.310    13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:48.310    13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:48.310    13:47:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:48.570    {
00:21:48.570      "cntlid": 57,
00:21:48.570      "qid": 0,
00:21:48.570      "state": "enabled",
00:21:48.570      "thread": "nvmf_tgt_poll_group_000",
00:21:48.570      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:48.570      "listen_address": {
00:21:48.570        "trtype": "RDMA",
00:21:48.570        "adrfam": "IPv4",
00:21:48.570        "traddr": "192.168.100.8",
00:21:48.570        "trsvcid": "4420"
00:21:48.570      },
00:21:48.570      "peer_address": {
00:21:48.570        "trtype": "RDMA",
00:21:48.570        "adrfam": "IPv4",
00:21:48.570        "traddr": "192.168.100.8",
00:21:48.570        "trsvcid": "56526"
00:21:48.570      },
00:21:48.570      "auth": {
00:21:48.570        "state": "completed",
00:21:48.570        "digest": "sha384",
00:21:48.570        "dhgroup": "ffdhe2048"
00:21:48.570      }
00:21:48.570    }
00:21:48.570  ]'
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:48.570    13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:48.570   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:48.830   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:48.830   13:47:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:49.399   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:49.658  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:49.658   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.918   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:50.177  
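connect_authenticate (auth.sh@65-71) boils down to two RPCs: register the host NQN on the subsystem together with the keyring entries for this keyid, then attach a host-side controller that must authenticate with the same material. A sketch with this pass's values, where key1/ckey1 are keyring names registered earlier in the job (not raw secrets) and $rpc and $HOSTNQN are as in the sketches above:

    # Target side: allow the host and bind its DHCHAP keys (default rpc socket).
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller; authentication happens during connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1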
00:21:50.177    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:50.177    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:50.177    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:50.177   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:50.437   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:50.437    {
00:21:50.437      "cntlid": 59,
00:21:50.437      "qid": 0,
00:21:50.437      "state": "enabled",
00:21:50.437      "thread": "nvmf_tgt_poll_group_000",
00:21:50.437      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:50.437      "listen_address": {
00:21:50.437        "trtype": "RDMA",
00:21:50.437        "adrfam": "IPv4",
00:21:50.437        "traddr": "192.168.100.8",
00:21:50.437        "trsvcid": "4420"
00:21:50.437      },
00:21:50.437      "peer_address": {
00:21:50.437        "trtype": "RDMA",
00:21:50.437        "adrfam": "IPv4",
00:21:50.437        "traddr": "192.168.100.8",
00:21:50.437        "trsvcid": "42667"
00:21:50.437      },
00:21:50.437      "auth": {
00:21:50.437        "state": "completed",
00:21:50.437        "digest": "sha384",
00:21:50.437        "dhgroup": "ffdhe2048"
00:21:50.437      }
00:21:50.437    }
00:21:50.437  ]'
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:50.437   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:50.437   13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:50.437    13:47:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:50.437   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:50.437   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:50.437   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
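The assertions at auth.sh@73-77 are the actual pass/fail criteria of each iteration; everything else is plumbing. Condensed (rpc_cmd is the autotest wrapper that targets the nvmf target's default rpc socket):

    # Pass criteria: the controller exists and its single qpair completed
    # DHCHAP with the configured parameters; detach so the next key starts clean.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0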
00:21:50.696   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:50.696   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:51.266  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:51.266   13:47:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:51.525   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:51.785  
00:21:51.785    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:51.785    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:51.785    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:52.044   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:52.044   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:52.044    {
00:21:52.044      "cntlid": 61,
00:21:52.044      "qid": 0,
00:21:52.044      "state": "enabled",
00:21:52.044      "thread": "nvmf_tgt_poll_group_000",
00:21:52.044      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:52.044      "listen_address": {
00:21:52.044        "trtype": "RDMA",
00:21:52.044        "adrfam": "IPv4",
00:21:52.044        "traddr": "192.168.100.8",
00:21:52.044        "trsvcid": "4420"
00:21:52.044      },
00:21:52.044      "peer_address": {
00:21:52.044        "trtype": "RDMA",
00:21:52.044        "adrfam": "IPv4",
00:21:52.044        "traddr": "192.168.100.8",
00:21:52.044        "trsvcid": "43575"
00:21:52.044      },
00:21:52.044      "auth": {
00:21:52.044        "state": "completed",
00:21:52.044        "digest": "sha384",
00:21:52.044        "dhgroup": "ffdhe2048"
00:21:52.044      }
00:21:52.044    }
00:21:52.044  ]'
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:52.044   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:52.044   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:52.044    13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:52.303   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:52.303   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:52.303   13:47:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:52.303   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:52.303   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:53.241  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:53.241   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:53.242   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.501   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:53.501   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:53.501   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:53.501   13:47:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:53.501  
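keyid 3 has no companion controller key in this test, so the expansion at auth.sh@68 yields an empty array and both RPCs above ran with --dhchap-key key3 alone: the host authenticates to the controller, but no bidirectional challenge is configured. The idiom (the second line approximates the call site; the trace only shows its expanded form):

    # Emit the --dhchap-ctrlr-key argument pair only when a controller key
    # exists for this keyid; for key3 the array expands to nothing.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"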
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:53.760   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:53.760   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:53.760    {
00:21:53.760      "cntlid": 63,
00:21:53.760      "qid": 0,
00:21:53.760      "state": "enabled",
00:21:53.760      "thread": "nvmf_tgt_poll_group_000",
00:21:53.760      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:53.760      "listen_address": {
00:21:53.760        "trtype": "RDMA",
00:21:53.760        "adrfam": "IPv4",
00:21:53.760        "traddr": "192.168.100.8",
00:21:53.760        "trsvcid": "4420"
00:21:53.760      },
00:21:53.760      "peer_address": {
00:21:53.760        "trtype": "RDMA",
00:21:53.760        "adrfam": "IPv4",
00:21:53.760        "traddr": "192.168.100.8",
00:21:53.760        "trsvcid": "56540"
00:21:53.760      },
00:21:53.760      "auth": {
00:21:53.760        "state": "completed",
00:21:53.760        "digest": "sha384",
00:21:53.760        "dhgroup": "ffdhe2048"
00:21:53.760      }
00:21:53.760    }
00:21:53.760  ]'
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:53.760   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:53.760    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:54.020   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:54.020    13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:54.020   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:54.020   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:54.020   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:54.279   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:54.279   13:47:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:54.848  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:54.848   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
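Here the outer loop at auth.sh@119 advances from ffdhe2048 to ffdhe3072 and the whole keyid sweep repeats. Reconstructed from the trace markers (an enclosing loop over digests, not visible in this excerpt, selects sha384 for this stretch):

    for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119: ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do     # auth.sh@120: key0 through key3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"    # auth.sh@121
            connect_authenticate sha384 "$dhgroup" "$keyid"             # auth.sh@123
        done
    done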
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:55.108   13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:55.367  
00:21:55.367    13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:55.367    13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:55.367    13:47:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:55.626    {
00:21:55.626      "cntlid": 65,
00:21:55.626      "qid": 0,
00:21:55.626      "state": "enabled",
00:21:55.626      "thread": "nvmf_tgt_poll_group_000",
00:21:55.626      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:55.626      "listen_address": {
00:21:55.626        "trtype": "RDMA",
00:21:55.626        "adrfam": "IPv4",
00:21:55.626        "traddr": "192.168.100.8",
00:21:55.626        "trsvcid": "4420"
00:21:55.626      },
00:21:55.626      "peer_address": {
00:21:55.626        "trtype": "RDMA",
00:21:55.626        "adrfam": "IPv4",
00:21:55.626        "traddr": "192.168.100.8",
00:21:55.626        "trsvcid": "48680"
00:21:55.626      },
00:21:55.626      "auth": {
00:21:55.626        "state": "completed",
00:21:55.626        "digest": "sha384",
00:21:55.626        "dhgroup": "ffdhe3072"
00:21:55.626      }
00:21:55.626    }
00:21:55.626  ]'
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:55.626    13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:55.626   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:55.884   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:55.885   13:47:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:21:56.453   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:56.712  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:56.712   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:56.712   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:56.712   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:56.713   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.972   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:56.972   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:56.972   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:56.972   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:57.232  
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:57.232   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:57.232   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:57.232    {
00:21:57.232      "cntlid": 67,
00:21:57.232      "qid": 0,
00:21:57.232      "state": "enabled",
00:21:57.232      "thread": "nvmf_tgt_poll_group_000",
00:21:57.232      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:57.232      "listen_address": {
00:21:57.232        "trtype": "RDMA",
00:21:57.232        "adrfam": "IPv4",
00:21:57.232        "traddr": "192.168.100.8",
00:21:57.232        "trsvcid": "4420"
00:21:57.232      },
00:21:57.232      "peer_address": {
00:21:57.232        "trtype": "RDMA",
00:21:57.232        "adrfam": "IPv4",
00:21:57.232        "traddr": "192.168.100.8",
00:21:57.232        "trsvcid": "39663"
00:21:57.232      },
00:21:57.232      "auth": {
00:21:57.232        "state": "completed",
00:21:57.232        "digest": "sha384",
00:21:57.232        "dhgroup": "ffdhe3072"
00:21:57.232      }
00:21:57.232    }
00:21:57.232  ]'
00:21:57.232    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:57.491   13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:57.491    13:47:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:57.491   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:57.491    13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:57.491   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:57.491   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:57.491   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:57.751   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:57.751   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:21:58.320   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:58.320  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:58.320   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:58.320   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.320   13:47:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.320   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.320   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:58.320   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:58.320   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:58.580   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:58.839  
00:21:58.839    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:58.839    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:58.839    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:59.098   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:59.098    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:59.099    {
00:21:59.099      "cntlid": 69,
00:21:59.099      "qid": 0,
00:21:59.099      "state": "enabled",
00:21:59.099      "thread": "nvmf_tgt_poll_group_000",
00:21:59.099      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:21:59.099      "listen_address": {
00:21:59.099        "trtype": "RDMA",
00:21:59.099        "adrfam": "IPv4",
00:21:59.099        "traddr": "192.168.100.8",
00:21:59.099        "trsvcid": "4420"
00:21:59.099      },
00:21:59.099      "peer_address": {
00:21:59.099        "trtype": "RDMA",
00:21:59.099        "adrfam": "IPv4",
00:21:59.099        "traddr": "192.168.100.8",
00:21:59.099        "trsvcid": "52886"
00:21:59.099      },
00:21:59.099      "auth": {
00:21:59.099        "state": "completed",
00:21:59.099        "digest": "sha384",
00:21:59.099        "dhgroup": "ffdhe3072"
00:21:59.099      }
00:21:59.099    }
00:21:59.099  ]'
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:59.099    13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:59.099   13:47:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:59.358   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:59.358   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:21:59.928   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:00.186  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:00.186   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:00.186   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.186   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.186   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.186   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:00.187   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:00.187   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:00.446   13:47:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:00.719  
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:00.719   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.719   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:00.719    {
00:22:00.719      "cntlid": 71,
00:22:00.719      "qid": 0,
00:22:00.719      "state": "enabled",
00:22:00.719      "thread": "nvmf_tgt_poll_group_000",
00:22:00.719      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:00.719      "listen_address": {
00:22:00.719        "trtype": "RDMA",
00:22:00.719        "adrfam": "IPv4",
00:22:00.719        "traddr": "192.168.100.8",
00:22:00.719        "trsvcid": "4420"
00:22:00.719      },
00:22:00.719      "peer_address": {
00:22:00.719        "trtype": "RDMA",
00:22:00.719        "adrfam": "IPv4",
00:22:00.719        "traddr": "192.168.100.8",
00:22:00.719        "trsvcid": "43217"
00:22:00.719      },
00:22:00.719      "auth": {
00:22:00.719        "state": "completed",
00:22:00.719        "digest": "sha384",
00:22:00.719        "dhgroup": "ffdhe3072"
00:22:00.719      }
00:22:00.719    }
00:22:00.719  ]'
00:22:00.719    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:00.978   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:00.978    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:00.978   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:00.978    13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:00.978   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:00.978   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:00.978   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:01.237   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:01.237   13:48:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:01.806  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:01.806   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.065   13:48:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.324  
00:22:02.324    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:02.324    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:02.324    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:02.583   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.583   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:02.583    {
00:22:02.583      "cntlid": 73,
00:22:02.583      "qid": 0,
00:22:02.583      "state": "enabled",
00:22:02.583      "thread": "nvmf_tgt_poll_group_000",
00:22:02.583      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:02.583      "listen_address": {
00:22:02.583        "trtype": "RDMA",
00:22:02.583        "adrfam": "IPv4",
00:22:02.583        "traddr": "192.168.100.8",
00:22:02.583        "trsvcid": "4420"
00:22:02.583      },
00:22:02.583      "peer_address": {
00:22:02.583        "trtype": "RDMA",
00:22:02.583        "adrfam": "IPv4",
00:22:02.583        "traddr": "192.168.100.8",
00:22:02.583        "trsvcid": "33282"
00:22:02.583      },
00:22:02.583      "auth": {
00:22:02.583        "state": "completed",
00:22:02.583        "digest": "sha384",
00:22:02.583        "dhgroup": "ffdhe4096"
00:22:02.583      }
00:22:02.583    }
00:22:02.583  ]'
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:02.583   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:02.583   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:02.583    13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:02.843   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:02.843   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:02.843   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:02.843   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:02.843   13:48:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:03.412   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:03.671  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
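Note the host-side flavor switch at target/auth.sh@80: where the bdev attach path passes keyring names (key0/ckey0), the nvme-cli path passes the raw DHHC-1 secret strings themselves. A hedged sketch of the same round trip with placeholder variables (HOST_SECRET and CTRL_SECRET stand in for real DHHC-1:xx:... strings; HOSTNQN and HOSTID match the uuid used in this run):

    # Kernel-initiator connect with explicit DH-HMAC-CHAP secrets, then disconnect.
    # --dhchap-ctrl-secret is only needed when controller authentication is tested too.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
         --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0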
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:03.671   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:03.930   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:04.189  
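Each attach in this loop is a two-party handshake: target/auth.sh@70 first grants the host NQN access to cnode0 under the key pair for this slot, then @71 has the host dial back in presenting the same pair. A sketch of that pairing for key1, assuming key1/ckey1 were registered in both keyrings earlier in the test:

    # Target side: allow the host under key1 (host auth) and ckey1 (controller auth).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach over RDMA presenting the matching keys.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1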
00:22:04.189    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:04.189    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:04.189    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:04.449   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:04.449    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:04.449    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.449    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.449    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.449   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:04.449  {
00:22:04.449  "cntlid": 75,
00:22:04.449  "qid": 0,
00:22:04.449  "state": "enabled",
00:22:04.449  "thread": "nvmf_tgt_poll_group_000",
00:22:04.449  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:04.449  "listen_address": {
00:22:04.449  "trtype": "RDMA",
00:22:04.449  "adrfam": "IPv4",
00:22:04.449  "traddr": "192.168.100.8",
00:22:04.449  "trsvcid": "4420"
00:22:04.449  },
00:22:04.449  "peer_address": {
00:22:04.449  "trtype": "RDMA",
00:22:04.449  "adrfam": "IPv4",
00:22:04.449  "traddr": "192.168.100.8",
00:22:04.449  "trsvcid": "55936"
00:22:04.449  },
00:22:04.449  "auth": {
00:22:04.449  "state": "completed",
00:22:04.449  "digest": "sha384",
00:22:04.449  "dhgroup": "ffdhe4096"
00:22:04.449  }
00:22:04.449  }
00:22:04.449  ]'
00:22:04.449    13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:04.449   13:48:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:04.449    13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:04.449   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:04.449    13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:04.449   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:04.449   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:04.449   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:04.709   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:04.709   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:05.277   13:48:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:05.277  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:05.278   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:05.278   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.278   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:05.538   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.538   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:05.538   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:05.538   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:05.539   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:05.798  
00:22:05.798    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:05.798    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:05.798    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:06.058   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.058   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:06.058  {
00:22:06.058  "cntlid": 77,
00:22:06.058  "qid": 0,
00:22:06.058  "state": "enabled",
00:22:06.058  "thread": "nvmf_tgt_poll_group_000",
00:22:06.058  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:06.058  "listen_address": {
00:22:06.058  "trtype": "RDMA",
00:22:06.058  "adrfam": "IPv4",
00:22:06.058  "traddr": "192.168.100.8",
00:22:06.058  "trsvcid": "4420"
00:22:06.058  },
00:22:06.058  "peer_address": {
00:22:06.058  "trtype": "RDMA",
00:22:06.058  "adrfam": "IPv4",
00:22:06.058  "traddr": "192.168.100.8",
00:22:06.058  "trsvcid": "54432"
00:22:06.058  },
00:22:06.058  "auth": {
00:22:06.058  "state": "completed",
00:22:06.058  "digest": "sha384",
00:22:06.058  "dhgroup": "ffdhe4096"
00:22:06.058  }
00:22:06.058  }
00:22:06.058  ]'
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:06.058   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:06.058    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:06.333   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:06.333    13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:06.333   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:06.333   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:06.333   13:48:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:06.333   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:06.333   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:07.272  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
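Every key slot ends with the same teardown so the next slot starts from a clean subsystem: drop the kernel-initiator connection, then revoke the host's entry on cnode0. Sketched in isolation:

    # Per-iteration cleanup before the next key is installed.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"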
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.272   13:48:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.272   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.272   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:07.272   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:07.272   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:07.841  
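The key3 pass above is the unidirectional case: both @70 and @71 ran without --dhchap-ctrlr-key, because the ckey=(...) expansion at @68 collapses to nothing when no controller key exists for the slot. A sketch of that bash idiom with illustrative array contents (the real arrays are populated earlier in the test):

    # ${arr[i]:+...} expands to the alternate text only if arr[i] is set and non-empty,
    # so a slot with no controller key yields an empty ckey array and one-way auth.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no entry for slot 3 in this run
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args for key$keyid: ${ckey[@]:-<none>}"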
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:07.841   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.841   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:07.841  {
00:22:07.841  "cntlid": 79,
00:22:07.841  "qid": 0,
00:22:07.841  "state": "enabled",
00:22:07.841  "thread": "nvmf_tgt_poll_group_000",
00:22:07.841  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:07.841  "listen_address": {
00:22:07.841  "trtype": "RDMA",
00:22:07.841  "adrfam": "IPv4",
00:22:07.841  "traddr": "192.168.100.8",
00:22:07.841  "trsvcid": "4420"
00:22:07.841  },
00:22:07.841  "peer_address": {
00:22:07.841  "trtype": "RDMA",
00:22:07.841  "adrfam": "IPv4",
00:22:07.841  "traddr": "192.168.100.8",
00:22:07.841  "trsvcid": "44283"
00:22:07.841  },
00:22:07.841  "auth": {
00:22:07.841  "state": "completed",
00:22:07.841  "digest": "sha384",
00:22:07.841  "dhgroup": "ffdhe4096"
00:22:07.841  }
00:22:07.841  }
00:22:07.841  ]'
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:07.841   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:07.841    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:08.118    13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:08.118   13:48:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:08.818  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:08.818   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:09.077   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.078   13:48:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.647  
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:09.647   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.647   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:09.647  {
00:22:09.647  "cntlid": 81,
00:22:09.647  "qid": 0,
00:22:09.647  "state": "enabled",
00:22:09.647  "thread": "nvmf_tgt_poll_group_000",
00:22:09.647  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:09.647  "listen_address": {
00:22:09.647  "trtype": "RDMA",
00:22:09.647  "adrfam": "IPv4",
00:22:09.647  "traddr": "192.168.100.8",
00:22:09.647  "trsvcid": "4420"
00:22:09.647  },
00:22:09.647  "peer_address": {
00:22:09.647  "trtype": "RDMA",
00:22:09.647  "adrfam": "IPv4",
00:22:09.647  "traddr": "192.168.100.8",
00:22:09.647  "trsvcid": "47601"
00:22:09.647  },
00:22:09.647  "auth": {
00:22:09.647  "state": "completed",
00:22:09.647  "digest": "sha384",
00:22:09.647  "dhgroup": "ffdhe6144"
00:22:09.647  }
00:22:09.647  }
00:22:09.647  ]'
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:09.647   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:09.647    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:09.647   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:09.907    13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:09.907   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:09.907   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:09.907   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:10.166   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:10.166   13:48:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:10.734  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:10.734   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:10.993   13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:11.253  
00:22:11.253    13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:11.253    13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:11.253    13:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:11.512   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:11.512    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:11.512    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.512    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.512    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.512   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:11.512  {
00:22:11.512  "cntlid": 83,
00:22:11.512  "qid": 0,
00:22:11.512  "state": "enabled",
00:22:11.512  "thread": "nvmf_tgt_poll_group_000",
00:22:11.512  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:11.512  "listen_address": {
00:22:11.512  "trtype": "RDMA",
00:22:11.512  "adrfam": "IPv4",
00:22:11.512  "traddr": "192.168.100.8",
00:22:11.512  "trsvcid": "4420"
00:22:11.512  },
00:22:11.512  "peer_address": {
00:22:11.512  "trtype": "RDMA",
00:22:11.512  "adrfam": "IPv4",
00:22:11.512  "traddr": "192.168.100.8",
00:22:11.512  "trsvcid": "48668"
00:22:11.512  },
00:22:11.512  "auth": {
00:22:11.512  "state": "completed",
00:22:11.512  "digest": "sha384",
00:22:11.512  "dhgroup": "ffdhe6144"
00:22:11.512  }
00:22:11.512  }
00:22:11.512  ]'
00:22:11.512    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:11.513   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:11.513    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:11.772    13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:11.772   13:48:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:12.710  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:12.710   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:12.970   13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:13.230  
00:22:13.230    13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:13.230    13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:13.230    13:48:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:13.489   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:13.489    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:13.490  {
00:22:13.490  "cntlid": 85,
00:22:13.490  "qid": 0,
00:22:13.490  "state": "enabled",
00:22:13.490  "thread": "nvmf_tgt_poll_group_000",
00:22:13.490  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:13.490  "listen_address": {
00:22:13.490  "trtype": "RDMA",
00:22:13.490  "adrfam": "IPv4",
00:22:13.490  "traddr": "192.168.100.8",
00:22:13.490  "trsvcid": "4420"
00:22:13.490  },
00:22:13.490  "peer_address": {
00:22:13.490  "trtype": "RDMA",
00:22:13.490  "adrfam": "IPv4",
00:22:13.490  "traddr": "192.168.100.8",
00:22:13.490  "trsvcid": "37675"
00:22:13.490  },
00:22:13.490  "auth": {
00:22:13.490  "state": "completed",
00:22:13.490  "digest": "sha384",
00:22:13.490  "dhgroup": "ffdhe6144"
00:22:13.490  }
00:22:13.490  }
00:22:13.490  ]'
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:13.490    13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:13.490   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:13.749   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:13.749   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:14.317   13:48:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:14.577  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:14.577   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:14.578   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:15.147  
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:15.147   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.147   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:15.147  {
00:22:15.147  "cntlid": 87,
00:22:15.147  "qid": 0,
00:22:15.147  "state": "enabled",
00:22:15.147  "thread": "nvmf_tgt_poll_group_000",
00:22:15.147  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:15.147  "listen_address": {
00:22:15.147  "trtype": "RDMA",
00:22:15.147  "adrfam": "IPv4",
00:22:15.147  "traddr": "192.168.100.8",
00:22:15.147  "trsvcid": "4420"
00:22:15.147  },
00:22:15.147  "peer_address": {
00:22:15.147  "trtype": "RDMA",
00:22:15.147  "adrfam": "IPv4",
00:22:15.147  "traddr": "192.168.100.8",
00:22:15.147  "trsvcid": "58405"
00:22:15.147  },
00:22:15.147  "auth": {
00:22:15.147  "state": "completed",
00:22:15.147  "digest": "sha384",
00:22:15.147  "dhgroup": "ffdhe6144"
00:22:15.147  }
00:22:15.147  }
00:22:15.147  ]'
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:15.147   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:15.147    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:15.406   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:15.406    13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:15.406   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:15.406   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:15.406   13:48:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:15.665   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:15.665   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:16.233  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:16.233   13:48:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
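At this point the outer loop at target/auth.sh@119 has advanced to its third DH group while the digest stays fixed at sha384, so the four key slots are about to be replayed against ffdhe8192. The driving shape, sketched with only the values visible in this stretch of the log (other groups and digests may be exercised outside this excerpt):

    # Nested sweep: every DH group is exercised with every configured key slot.
    digest=sha384
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done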
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:16.493   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:17.062  
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:17.062   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.062   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:17.062  {
00:22:17.062    "cntlid": 89,
00:22:17.062    "qid": 0,
00:22:17.062    "state": "enabled",
00:22:17.062    "thread": "nvmf_tgt_poll_group_000",
00:22:17.062    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:17.062    "listen_address": {
00:22:17.062      "trtype": "RDMA",
00:22:17.062      "adrfam": "IPv4",
00:22:17.062      "traddr": "192.168.100.8",
00:22:17.062      "trsvcid": "4420"
00:22:17.062    },
00:22:17.062    "peer_address": {
00:22:17.062      "trtype": "RDMA",
00:22:17.062      "adrfam": "IPv4",
00:22:17.062      "traddr": "192.168.100.8",
00:22:17.062      "trsvcid": "37706"
00:22:17.062    },
00:22:17.062    "auth": {
00:22:17.062      "state": "completed",
00:22:17.062      "digest": "sha384",
00:22:17.062      "dhgroup": "ffdhe8192"
00:22:17.062    }
00:22:17.062  }
00:22:17.062  ]'
00:22:17.062    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:17.322   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:17.322    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:17.322   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:17.322    13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:17.322   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
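
The three [[ ... ]] checks above are the heart of connect_authenticate: after bdev_nvme_attach_controller succeeds, the script pulls the subsystem's qpairs and asserts that the negotiated auth.digest, auth.dhgroup, and auth.state match what was configured. Condensed into a standalone sketch with the same RPC and jq filters:

    # Assert the qpair actually negotiated the configured auth parameters.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
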
00:22:17.322   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:17.322   13:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:17.581   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:17.581   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
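
The same combination is then exercised through the kernel initiator with nvme-cli. The secret strings follow the NVMe TP 8006 representation: a DHHC-1:NN: prefix (NN tags how the secret is transformed before use, e.g. 00 for as-is, 03 for SHA-512) followed by the base64-encoded key material plus a CRC-32, closed by a final colon. A sketch with placeholder secrets and with $hostnqn/$hostid standing in for the values above; real secrets can be generated with nvme gen-dhchap-key, and keys lifted from a CI log should never be reused:

    # Connect the kernel initiator with bidirectional DH-HMAC-CHAP.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret      'DHHC-1:00:<base64-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-key>:'
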
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:18.150  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
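
Each pass ends with a symmetric teardown so the next digest/dhgroup/key combination starts from a clean slate: disconnect the kernel initiator, then revoke the host's authorization on the subsystem. In sketch form:

    # Tear down before the next iteration.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
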
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:18.150   13:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:18.410   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:18.979  
00:22:18.979    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:18.979    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:18.979    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:19.239  {
00:22:19.239    "cntlid": 91,
00:22:19.239    "qid": 0,
00:22:19.239    "state": "enabled",
00:22:19.239    "thread": "nvmf_tgt_poll_group_000",
00:22:19.239    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:19.239    "listen_address": {
00:22:19.239      "trtype": "RDMA",
00:22:19.239      "adrfam": "IPv4",
00:22:19.239      "traddr": "192.168.100.8",
00:22:19.239      "trsvcid": "4420"
00:22:19.239    },
00:22:19.239    "peer_address": {
00:22:19.239      "trtype": "RDMA",
00:22:19.239      "adrfam": "IPv4",
00:22:19.239      "traddr": "192.168.100.8",
00:22:19.239      "trsvcid": "40409"
00:22:19.239    },
00:22:19.239    "auth": {
00:22:19.239      "state": "completed",
00:22:19.239      "digest": "sha384",
00:22:19.239      "dhgroup": "ffdhe8192"
00:22:19.239    }
00:22:19.239  }
00:22:19.239  ]'
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:19.239    13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:19.239   13:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:19.498   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:19.498   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:20.067   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:20.326  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:20.326   13:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:20.326   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:20.893  
00:22:20.893    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:20.893    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:20.893    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:21.151   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:21.151   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:21.151  {
00:22:21.151    "cntlid": 93,
00:22:21.151    "qid": 0,
00:22:21.151    "state": "enabled",
00:22:21.151    "thread": "nvmf_tgt_poll_group_000",
00:22:21.151    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:21.151    "listen_address": {
00:22:21.151      "trtype": "RDMA",
00:22:21.151      "adrfam": "IPv4",
00:22:21.151      "traddr": "192.168.100.8",
00:22:21.151      "trsvcid": "4420"
00:22:21.151    },
00:22:21.151    "peer_address": {
00:22:21.151      "trtype": "RDMA",
00:22:21.151      "adrfam": "IPv4",
00:22:21.151      "traddr": "192.168.100.8",
00:22:21.151      "trsvcid": "41627"
00:22:21.151    },
00:22:21.151    "auth": {
00:22:21.151      "state": "completed",
00:22:21.151      "digest": "sha384",
00:22:21.151      "dhgroup": "ffdhe8192"
00:22:21.151    }
00:22:21.151  }
00:22:21.151  ]'
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:21.151   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:21.151   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:21.151    13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:21.410   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:21.410   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:21.410   13:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:21.410   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:21.411   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:22.349  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:22.349   13:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:22.349   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
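
Unlike the key0 to key2 passes, this key3 pass carries no --dhchap-ctrlr-key anywhere: auth.sh@68 builds ckey with the ${ckeys[$3]:+...} expansion, which yields nothing when ckeys[3] is unset or empty, so this iteration exercises unidirectional authentication (the host proves its identity; the controller is not challenged). A self-contained illustration of that expansion with hypothetical array contents:

    # ${arr[i]:+word} expands to word only when arr[i] is set and non-empty.
    ckeys=("ckey-A" "")
    echo "key0: ${ckeys[0]:+--dhchap-ctrlr-key ckey0}"   # key0: --dhchap-ctrlr-key ckey0
    echo "key1: ${ckeys[1]:+--dhchap-ctrlr-key ckey1}"   # key1: (expansion is empty)
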
00:22:22.918  
00:22:22.918    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:22.918    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:22.918    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:23.178  {
00:22:23.178    "cntlid": 95,
00:22:23.178    "qid": 0,
00:22:23.178    "state": "enabled",
00:22:23.178    "thread": "nvmf_tgt_poll_group_000",
00:22:23.178    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:23.178    "listen_address": {
00:22:23.178      "trtype": "RDMA",
00:22:23.178      "adrfam": "IPv4",
00:22:23.178      "traddr": "192.168.100.8",
00:22:23.178      "trsvcid": "4420"
00:22:23.178    },
00:22:23.178    "peer_address": {
00:22:23.178      "trtype": "RDMA",
00:22:23.178      "adrfam": "IPv4",
00:22:23.178      "traddr": "192.168.100.8",
00:22:23.178      "trsvcid": "48595"
00:22:23.178    },
00:22:23.178    "auth": {
00:22:23.178      "state": "completed",
00:22:23.178      "digest": "sha384",
00:22:23.178      "dhgroup": "ffdhe8192"
00:22:23.178    }
00:22:23.178  }
00:22:23.178  ]'
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:23.178    13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:23.178   13:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:23.438   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:23.438   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:24.006   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:24.265  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:24.265   13:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
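
The frame lines above (auth.sh@118-121) place this log inside the full sweep: three nested loops over digests, dhgroups, and key indices, reconfiguring the host before every connect_authenticate call. The skeleton below is reconstructed from the traced line numbers, so it is an approximation of the actual script rather than a verbatim copy:

    # Sweep every digest x dhgroup x key combination (auth.sh@118-123).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                        --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
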
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:24.525   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:24.784  
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:24.784   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.784   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:24.784  {
00:22:24.784    "cntlid": 97,
00:22:24.784    "qid": 0,
00:22:24.784    "state": "enabled",
00:22:24.784    "thread": "nvmf_tgt_poll_group_000",
00:22:24.784    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:24.784    "listen_address": {
00:22:24.784      "trtype": "RDMA",
00:22:24.784      "adrfam": "IPv4",
00:22:24.784      "traddr": "192.168.100.8",
00:22:24.784      "trsvcid": "4420"
00:22:24.784    },
00:22:24.784    "peer_address": {
00:22:24.784      "trtype": "RDMA",
00:22:24.784      "adrfam": "IPv4",
00:22:24.784      "traddr": "192.168.100.8",
00:22:24.784      "trsvcid": "37782"
00:22:24.784    },
00:22:24.784    "auth": {
00:22:24.784      "state": "completed",
00:22:24.784      "digest": "sha512",
00:22:24.784      "dhgroup": "null"
00:22:24.784    }
00:22:24.784  }
00:22:24.784  ]'
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:24.784   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:24.784    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:25.043   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:25.044    13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:25.044   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:25.044   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:25.044   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:25.303   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:25.303   13:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:25.904  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:25.904   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:26.164   13:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:26.424  
00:22:26.424    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:26.424    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:26.424    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:26.684  {
00:22:26.684    "cntlid": 99,
00:22:26.684    "qid": 0,
00:22:26.684    "state": "enabled",
00:22:26.684    "thread": "nvmf_tgt_poll_group_000",
00:22:26.684    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:26.684    "listen_address": {
00:22:26.684      "trtype": "RDMA",
00:22:26.684      "adrfam": "IPv4",
00:22:26.684      "traddr": "192.168.100.8",
00:22:26.684      "trsvcid": "4420"
00:22:26.684    },
00:22:26.684    "peer_address": {
00:22:26.684      "trtype": "RDMA",
00:22:26.684      "adrfam": "IPv4",
00:22:26.684      "traddr": "192.168.100.8",
00:22:26.684      "trsvcid": "42562"
00:22:26.684    },
00:22:26.684    "auth": {
00:22:26.684      "state": "completed",
00:22:26.684      "digest": "sha512",
00:22:26.684      "dhgroup": "null"
00:22:26.684    }
00:22:26.684  }
00:22:26.684  ]'
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:26.684    13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:26.684   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:26.944   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:26.944   13:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:27.513   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:27.773  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:27.773   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:28.033  
00:22:28.033    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:28.033    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:28.033    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:28.292   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:28.292    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:28.292    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:28.292    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:28.292    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:28.292   13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:28.292  {
00:22:28.293    "cntlid": 101,
00:22:28.293    "qid": 0,
00:22:28.293    "state": "enabled",
00:22:28.293    "thread": "nvmf_tgt_poll_group_000",
00:22:28.293    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:28.293    "listen_address": {
00:22:28.293      "trtype": "RDMA",
00:22:28.293      "adrfam": "IPv4",
00:22:28.293      "traddr": "192.168.100.8",
00:22:28.293      "trsvcid": "4420"
00:22:28.293    },
00:22:28.293    "peer_address": {
00:22:28.293      "trtype": "RDMA",
00:22:28.293      "adrfam": "IPv4",
00:22:28.293      "traddr": "192.168.100.8",
00:22:28.293      "trsvcid": "44267"
00:22:28.293    },
00:22:28.293    "auth": {
00:22:28.293      "state": "completed",
00:22:28.293      "digest": "sha512",
00:22:28.293      "dhgroup": "null"
00:22:28.293    }
00:22:28.293  }
00:22:28.293  ]'
00:22:28.293    13:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:28.293   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:28.293    13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:28.293   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:28.293    13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:28.552   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:28.552   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:28.552   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:28.812   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:28.812   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:29.381   13:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:29.381  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:29.381   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:29.641   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:29.900  
00:22:29.900    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:29.900    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:29.900    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:30.160  {
00:22:30.160    "cntlid": 103,
00:22:30.160    "qid": 0,
00:22:30.160    "state": "enabled",
00:22:30.160    "thread": "nvmf_tgt_poll_group_000",
00:22:30.160    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:30.160    "listen_address": {
00:22:30.160      "trtype": "RDMA",
00:22:30.160      "adrfam": "IPv4",
00:22:30.160      "traddr": "192.168.100.8",
00:22:30.160      "trsvcid": "4420"
00:22:30.160    },
00:22:30.160    "peer_address": {
00:22:30.160      "trtype": "RDMA",
00:22:30.160      "adrfam": "IPv4",
00:22:30.160      "traddr": "192.168.100.8",
00:22:30.160      "trsvcid": "53264"
00:22:30.160    },
00:22:30.160    "auth": {
00:22:30.160      "state": "completed",
00:22:30.160      "digest": "sha512",
00:22:30.160      "dhgroup": "null"
00:22:30.160    }
00:22:30.160  }
00:22:30.160  ]'
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:30.160    13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:30.160   13:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:30.419   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:30.419   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:30.987   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:31.246  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:31.246   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:31.247   13:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:31.506  
00:22:31.506    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:31.506    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.506    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:31.765   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.765   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:31.765  {
00:22:31.765  "cntlid": 105,
00:22:31.765  "qid": 0,
00:22:31.765  "state": "enabled",
00:22:31.765  "thread": "nvmf_tgt_poll_group_000",
00:22:31.765  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:31.765  "listen_address": {
00:22:31.765  "trtype": "RDMA",
00:22:31.765  "adrfam": "IPv4",
00:22:31.765  "traddr": "192.168.100.8",
00:22:31.765  "trsvcid": "4420"
00:22:31.765  },
00:22:31.765  "peer_address": {
00:22:31.765  "trtype": "RDMA",
00:22:31.765  "adrfam": "IPv4",
00:22:31.765  "traddr": "192.168.100.8",
00:22:31.765  "trsvcid": "56133"
00:22:31.765  },
00:22:31.765  "auth": {
00:22:31.765  "state": "completed",
00:22:31.765  "digest": "sha512",
00:22:31.765  "dhgroup": "ffdhe2048"
00:22:31.765  }
00:22:31.765  }
00:22:31.765  ]'
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:31.765   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:31.765    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:32.024   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:32.024    13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:32.024   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:32.024   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:32.025   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:32.375   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:32.375   13:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:32.943   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:32.943  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:32.943   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:32.944   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:33.203   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:22:33.203   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:33.204   13:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:33.463  
00:22:33.463    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:33.463    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:33.463    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:33.722   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:33.722    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:33.722    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.722    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.722    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.722   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:33.722  {
00:22:33.722  "cntlid": 107,
00:22:33.722  "qid": 0,
00:22:33.722  "state": "enabled",
00:22:33.722  "thread": "nvmf_tgt_poll_group_000",
00:22:33.722  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:33.722  "listen_address": {
00:22:33.722  "trtype": "RDMA",
00:22:33.722  "adrfam": "IPv4",
00:22:33.722  "traddr": "192.168.100.8",
00:22:33.722  "trsvcid": "4420"
00:22:33.722  },
00:22:33.722  "peer_address": {
00:22:33.722  "trtype": "RDMA",
00:22:33.722  "adrfam": "IPv4",
00:22:33.722  "traddr": "192.168.100.8",
00:22:33.722  "trsvcid": "54849"
00:22:33.722  },
00:22:33.722  "auth": {
00:22:33.722  "state": "completed",
00:22:33.723  "digest": "sha512",
00:22:33.723  "dhgroup": "ffdhe2048"
00:22:33.723  }
00:22:33.723  }
00:22:33.723  ]'
00:22:33.723    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:33.723   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:33.723    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:33.723   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:33.723    13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:33.723   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:33.723   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:33.723   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:33.981   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:33.981   13:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:34.549   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:34.808  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:34.808   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:34.809   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:35.068  
00:22:35.068    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:35.068    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:35.068    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:35.327   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:35.327    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:35.327    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:35.327    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:35.327    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:35.327   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:35.327  {
00:22:35.327  "cntlid": 109,
00:22:35.327  "qid": 0,
00:22:35.327  "state": "enabled",
00:22:35.327  "thread": "nvmf_tgt_poll_group_000",
00:22:35.327  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:35.327  "listen_address": {
00:22:35.327  "trtype": "RDMA",
00:22:35.327  "adrfam": "IPv4",
00:22:35.327  "traddr": "192.168.100.8",
00:22:35.327  "trsvcid": "4420"
00:22:35.327  },
00:22:35.327  "peer_address": {
00:22:35.327  "trtype": "RDMA",
00:22:35.327  "adrfam": "IPv4",
00:22:35.328  "traddr": "192.168.100.8",
00:22:35.328  "trsvcid": "48134"
00:22:35.328  },
00:22:35.328  "auth": {
00:22:35.328  "state": "completed",
00:22:35.328  "digest": "sha512",
00:22:35.328  "dhgroup": "ffdhe2048"
00:22:35.328  }
00:22:35.328  }
00:22:35.328  ]'
00:22:35.328    13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:35.328   13:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:35.328    13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:35.328   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:35.328    13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:35.586   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:35.586   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:35.586   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:35.586   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:35.586   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:36.523   13:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:36.523  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:36.523   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:36.524   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:36.783  
00:22:36.783    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:36.783    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:36.783    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:37.042   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:37.042   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:37.042  {
00:22:37.042  "cntlid": 111,
00:22:37.042  "qid": 0,
00:22:37.042  "state": "enabled",
00:22:37.042  "thread": "nvmf_tgt_poll_group_000",
00:22:37.042  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:37.042  "listen_address": {
00:22:37.042  "trtype": "RDMA",
00:22:37.042  "adrfam": "IPv4",
00:22:37.042  "traddr": "192.168.100.8",
00:22:37.042  "trsvcid": "4420"
00:22:37.042  },
00:22:37.042  "peer_address": {
00:22:37.042  "trtype": "RDMA",
00:22:37.042  "adrfam": "IPv4",
00:22:37.042  "traddr": "192.168.100.8",
00:22:37.042  "trsvcid": "34208"
00:22:37.042  },
00:22:37.042  "auth": {
00:22:37.042  "state": "completed",
00:22:37.042  "digest": "sha512",
00:22:37.042  "dhgroup": "ffdhe2048"
00:22:37.042  }
00:22:37.042  }
00:22:37.042  ]'
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:37.042   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:37.042    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:37.301   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:37.301    13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:37.301   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:37.301   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:37.301   13:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:37.560   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:37.560   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:38.128  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:38.128   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:38.388   13:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:38.648  
00:22:38.648    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:38.648    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:38.648    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:38.908  {
00:22:38.908  "cntlid": 113,
00:22:38.908  "qid": 0,
00:22:38.908  "state": "enabled",
00:22:38.908  "thread": "nvmf_tgt_poll_group_000",
00:22:38.908  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:38.908  "listen_address": {
00:22:38.908  "trtype": "RDMA",
00:22:38.908  "adrfam": "IPv4",
00:22:38.908  "traddr": "192.168.100.8",
00:22:38.908  "trsvcid": "4420"
00:22:38.908  },
00:22:38.908  "peer_address": {
00:22:38.908  "trtype": "RDMA",
00:22:38.908  "adrfam": "IPv4",
00:22:38.908  "traddr": "192.168.100.8",
00:22:38.908  "trsvcid": "46152"
00:22:38.908  },
00:22:38.908  "auth": {
00:22:38.908  "state": "completed",
00:22:38.908  "digest": "sha512",
00:22:38.908  "dhgroup": "ffdhe3072"
00:22:38.908  }
00:22:38.908  }
00:22:38.908  ]'
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:38.908    13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:38.908   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:39.167   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:39.167   13:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:39.736   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:39.996  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:39.996   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:40.256   13:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:40.515  
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:40.515   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:40.515   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:40.515  {
00:22:40.515  "cntlid": 115,
00:22:40.515  "qid": 0,
00:22:40.515  "state": "enabled",
00:22:40.515  "thread": "nvmf_tgt_poll_group_000",
00:22:40.515  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:40.515  "listen_address": {
00:22:40.515  "trtype": "RDMA",
00:22:40.515  "adrfam": "IPv4",
00:22:40.515  "traddr": "192.168.100.8",
00:22:40.515  "trsvcid": "4420"
00:22:40.515  },
00:22:40.515  "peer_address": {
00:22:40.515  "trtype": "RDMA",
00:22:40.515  "adrfam": "IPv4",
00:22:40.515  "traddr": "192.168.100.8",
00:22:40.515  "trsvcid": "51829"
00:22:40.515  },
00:22:40.515  "auth": {
00:22:40.515  "state": "completed",
00:22:40.515  "digest": "sha512",
00:22:40.515  "dhgroup": "ffdhe3072"
00:22:40.515  }
00:22:40.515  }
00:22:40.515  ]'
00:22:40.515    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:40.775   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:40.775    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:40.775   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:40.775    13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:40.775   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:40.775   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:40.775   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:41.034   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:41.034   13:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:41.603  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:41.603   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:41.862   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:41.863   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:41.863   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:41.863   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:42.122  
00:22:42.122    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:42.122    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:42.122    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:42.381   13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:42.381    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:42.381    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.381    13:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:42.381    13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.381   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:42.381  {
00:22:42.381  "cntlid": 117,
00:22:42.381  "qid": 0,
00:22:42.381  "state": "enabled",
00:22:42.381  "thread": "nvmf_tgt_poll_group_000",
00:22:42.381  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:42.381  "listen_address": {
00:22:42.381  "trtype": "RDMA",
00:22:42.381  "adrfam": "IPv4",
00:22:42.381  "traddr": "192.168.100.8",
00:22:42.381  "trsvcid": "4420"
00:22:42.381  },
00:22:42.381  "peer_address": {
00:22:42.381  "trtype": "RDMA",
00:22:42.381  "adrfam": "IPv4",
00:22:42.381  "traddr": "192.168.100.8",
00:22:42.381  "trsvcid": "43538"
00:22:42.381  },
00:22:42.382  "auth": {
00:22:42.382  "state": "completed",
00:22:42.382  "digest": "sha512",
00:22:42.382  "dhgroup": "ffdhe3072"
00:22:42.382  }
00:22:42.382  }
00:22:42.382  ]'
00:22:42.382    13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:42.382   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:42.382    13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:42.382   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:42.382    13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:42.641   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:42.641   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:42.641   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:42.641   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:42.641   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:43.579   13:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:43.579  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:43.579   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:43.839  
00:22:43.839    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:43.839    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:43.839    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:44.099   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:44.099   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:44.099  {
00:22:44.099  "cntlid": 119,
00:22:44.099  "qid": 0,
00:22:44.099  "state": "enabled",
00:22:44.099  "thread": "nvmf_tgt_poll_group_000",
00:22:44.099  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:44.099  "listen_address": {
00:22:44.099  "trtype": "RDMA",
00:22:44.099  "adrfam": "IPv4",
00:22:44.099  "traddr": "192.168.100.8",
00:22:44.099  "trsvcid": "4420"
00:22:44.099  },
00:22:44.099  "peer_address": {
00:22:44.099  "trtype": "RDMA",
00:22:44.099  "adrfam": "IPv4",
00:22:44.099  "traddr": "192.168.100.8",
00:22:44.099  "trsvcid": "52323"
00:22:44.099  },
00:22:44.099  "auth": {
00:22:44.099  "state": "completed",
00:22:44.099  "digest": "sha512",
00:22:44.099  "dhgroup": "ffdhe3072"
00:22:44.099  }
00:22:44.099  }
00:22:44.099  ]'
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:44.099   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:44.099    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:44.358   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:44.358    13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:44.358   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:44.358   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:44.358   13:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:44.618   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:44.618   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
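
The --dhchap-secret strings use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> selects the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key bytes followed by a CRC-32 of the key. A quick length sanity check on the key3 secret from this run, assuming that layout:

    # Decode a DHHC-1 secret and report its payload length.
    secret='DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:'
    b64=${secret#DHHC-1:*:}                 # strip the 'DHHC-1:<t>:' prefix
    b64=${b64%:}                            # and the trailing ':'
    printf '%s' "$b64" | base64 -d | wc -c  # 64-byte key + 4-byte CRC32 -> 68
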
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:45.187  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
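
That completes the sha512/ffdhe3072 sweep; the @119/@120 loop markers below show the harness advancing to ffdhe4096 and restarting at key0. The loop shape implied by the target/auth.sh@119-123 markers, as a sketch; the function names come from the trace, while the array contents and the surrounding digest loop are assumptions:

    # Assumed driver shape for the markers at target/auth.sh@119-123.
    for dhgroup in "${dhgroups[@]}"; do    # e.g. ffdhe2048..ffdhe8192 (assumed)
        for keyid in "${!keys[@]}"; do     # key0..key3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
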
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:45.187   13:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
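
hostrpc (auth.sh@31) is the same rpc.py client pointed at the host application's socket, /var/tmp/host.sock, in contrast to rpc_cmd, which talks to the target-side app. Written out as a wrapper, with the body inferred from the expanded @31 lines in the trace:

    # Host-side RPC wrapper inferred from the target/auth.sh@31 expansions.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"  # $rootdir assumed
    }
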
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:45.448   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
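
Note the contrast with the key3 passes: key0 attaches with both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication), while key3 carried no controller key. That is the `${ckeys[$3]:+...}` expansion at auth.sh@68 doing its job: the flag pair is emitted only when a ckey exists for that index. A standalone illustration, with assumed array contents:

    # Bash ':+' alternate-value expansion as used at target/auth.sh@68.
    ckeys=("c0" "c1" "c2" "")              # assumed: no ctrlr key for index 3
    for i in 0 3; do
        args=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${args[*]:-<unidirectional, no ctrlr key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <unidirectional, no ctrlr key>
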
00:22:45.707  
00:22:45.707    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:45.707    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:45.707    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:45.966  {
00:22:45.966  "cntlid": 121,
00:22:45.966  "qid": 0,
00:22:45.966  "state": "enabled",
00:22:45.966  "thread": "nvmf_tgt_poll_group_000",
00:22:45.966  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:45.966  "listen_address": {
00:22:45.966  "trtype": "RDMA",
00:22:45.966  "adrfam": "IPv4",
00:22:45.966  "traddr": "192.168.100.8",
00:22:45.966  "trsvcid": "4420"
00:22:45.966  },
00:22:45.966  "peer_address": {
00:22:45.966  "trtype": "RDMA",
00:22:45.966  "adrfam": "IPv4",
00:22:45.966  "traddr": "192.168.100.8",
00:22:45.966  "trsvcid": "43214"
00:22:45.966  },
00:22:45.966  "auth": {
00:22:45.966  "state": "completed",
00:22:45.966  "digest": "sha512",
00:22:45.966  "dhgroup": "ffdhe4096"
00:22:45.966  }
00:22:45.966  }
00:22:45.966  ]'
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:45.966    13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:45.966   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:46.226   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:46.226   13:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
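
The kernel-initiator leg (auth.sh@36) exercises the same key pair through nvme-cli. For reading these long lines: -i is --nr-io-queues, -l is --ctrl-loss-tmo (0 disables reconnect retries), -q and --hostid set the host identity matching the NQN registered on the target, and the two --dhchap-* options carry the host and controller secrets. Reflowed with placeholders standing in for the literal values shown in the trace:

    # Same connect as above, broken across lines for readability;
    # $uuid, $key and $ckey are placeholders, not values from this run.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -l 0 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${uuid}" --hostid "${uuid}" \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
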
00:22:46.794   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:47.054  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.054   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:47.313   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.313   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.313   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.313   13:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.573  
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:47.573   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.573    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:47.832    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:47.832  {
00:22:47.832  "cntlid": 123,
00:22:47.832  "qid": 0,
00:22:47.832  "state": "enabled",
00:22:47.832  "thread": "nvmf_tgt_poll_group_000",
00:22:47.832  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:47.832  "listen_address": {
00:22:47.832  "trtype": "RDMA",
00:22:47.832  "adrfam": "IPv4",
00:22:47.832  "traddr": "192.168.100.8",
00:22:47.832  "trsvcid": "4420"
00:22:47.832  },
00:22:47.832  "peer_address": {
00:22:47.832  "trtype": "RDMA",
00:22:47.832  "adrfam": "IPv4",
00:22:47.832  "traddr": "192.168.100.8",
00:22:47.832  "trsvcid": "58874"
00:22:47.832  },
00:22:47.832  "auth": {
00:22:47.832  "state": "completed",
00:22:47.832  "digest": "sha512",
00:22:47.832  "dhgroup": "ffdhe4096"
00:22:47.832  }
00:22:47.832  }
00:22:47.832  ]'
00:22:47.832    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:47.832    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:47.832    13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:47.832   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:48.091   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:48.091   13:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:48.659   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:48.659  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:48.659   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
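
Each pass ends the same way: disconnect the kernel controller, then drop the host entry from the subsystem so the next iteration can re-add it with the next key pair. The teardown condensed, with the hostnqn variable assumed:

    # Per-iteration teardown traced at target/auth.sh@82-83.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
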
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:48.660   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:48.919   13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:49.179  
00:22:49.179    13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:49.179    13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:49.179    13:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:49.438   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.438    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:49.438    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.438    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:49.438    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.438   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:49.438  {
00:22:49.438  "cntlid": 125,
00:22:49.438  "qid": 0,
00:22:49.438  "state": "enabled",
00:22:49.438  "thread": "nvmf_tgt_poll_group_000",
00:22:49.438  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:49.438  "listen_address": {
00:22:49.438  "trtype": "RDMA",
00:22:49.438  "adrfam": "IPv4",
00:22:49.438  "traddr": "192.168.100.8",
00:22:49.438  "trsvcid": "4420"
00:22:49.438  },
00:22:49.438  "peer_address": {
00:22:49.438  "trtype": "RDMA",
00:22:49.438  "adrfam": "IPv4",
00:22:49.438  "traddr": "192.168.100.8",
00:22:49.438  "trsvcid": "60553"
00:22:49.438  },
00:22:49.438  "auth": {
00:22:49.438  "state": "completed",
00:22:49.438  "digest": "sha512",
00:22:49.438  "dhgroup": "ffdhe4096"
00:22:49.438  }
00:22:49.438  }
00:22:49.438  ]'
00:22:49.438    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:49.438   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:49.439    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:49.439   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:49.439    13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:49.698   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:49.698   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:49.698   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:49.698   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:49.698   13:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:50.636  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:50.636   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:51.204  
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:51.204   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.204   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:51.204  {
00:22:51.204  "cntlid": 127,
00:22:51.204  "qid": 0,
00:22:51.204  "state": "enabled",
00:22:51.204  "thread": "nvmf_tgt_poll_group_000",
00:22:51.204  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:51.204  "listen_address": {
00:22:51.204  "trtype": "RDMA",
00:22:51.204  "adrfam": "IPv4",
00:22:51.204  "traddr": "192.168.100.8",
00:22:51.204  "trsvcid": "4420"
00:22:51.204  },
00:22:51.204  "peer_address": {
00:22:51.204  "trtype": "RDMA",
00:22:51.204  "adrfam": "IPv4",
00:22:51.204  "traddr": "192.168.100.8",
00:22:51.204  "trsvcid": "33879"
00:22:51.204  },
00:22:51.204  "auth": {
00:22:51.204  "state": "completed",
00:22:51.204  "digest": "sha512",
00:22:51.204  "dhgroup": "ffdhe4096"
00:22:51.204  }
00:22:51.204  }
00:22:51.204  ]'
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:51.204   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:51.204    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:51.204   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:51.463    13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:51.463   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:51.463   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:51.463   13:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:51.463   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:51.463   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:52.400  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
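
A detail worth noticing in the qpair dumps so far: cntlid steps by two per pass (119, 121, 123, 125, 127), consistent with each iteration allocating one controller for the SPDK-host attach and a second for the kernel nvme connect. One way to watch the allocation directly, reusing the RPC from the trace with an assumed jq filter:

    # List controller IDs on the subsystem's active qpairs.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[].cntlid'
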
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:52.400   13:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.400   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.969  
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:52.969   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:52.969    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:52.969   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:52.969  {
00:22:52.969  "cntlid": 129,
00:22:52.969  "qid": 0,
00:22:52.969  "state": "enabled",
00:22:52.969  "thread": "nvmf_tgt_poll_group_000",
00:22:52.969  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:52.969  "listen_address": {
00:22:52.969  "trtype": "RDMA",
00:22:52.969  "adrfam": "IPv4",
00:22:52.969  "traddr": "192.168.100.8",
00:22:52.969  "trsvcid": "4420"
00:22:52.969  },
00:22:52.969  "peer_address": {
00:22:52.969  "trtype": "RDMA",
00:22:52.969  "adrfam": "IPv4",
00:22:52.969  "traddr": "192.168.100.8",
00:22:52.969  "trsvcid": "38092"
00:22:52.969  },
00:22:52.969  "auth": {
00:22:52.969  "state": "completed",
00:22:52.969  "digest": "sha512",
00:22:52.969  "dhgroup": "ffdhe6144"
00:22:52.969  }
00:22:52.969  }
00:22:52.969  ]'
00:22:52.970    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:52.970   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:52.970    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:53.228   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:53.228    13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:53.228   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:53.228   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:53.228   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:53.486   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:53.486   13:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:54.054  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:54.054   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.313   13:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.572  
00:22:54.572    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:54.572    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:54.572    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:54.832   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:54.832   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:54.832  {
00:22:54.832  "cntlid": 131,
00:22:54.832  "qid": 0,
00:22:54.832  "state": "enabled",
00:22:54.832  "thread": "nvmf_tgt_poll_group_000",
00:22:54.832  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:54.832  "listen_address": {
00:22:54.832  "trtype": "RDMA",
00:22:54.832  "adrfam": "IPv4",
00:22:54.832  "traddr": "192.168.100.8",
00:22:54.832  "trsvcid": "4420"
00:22:54.832  },
00:22:54.832  "peer_address": {
00:22:54.832  "trtype": "RDMA",
00:22:54.832  "adrfam": "IPv4",
00:22:54.832  "traddr": "192.168.100.8",
00:22:54.832  "trsvcid": "59737"
00:22:54.832  },
00:22:54.832  "auth": {
00:22:54.832  "state": "completed",
00:22:54.832  "digest": "sha512",
00:22:54.832  "dhgroup": "ffdhe6144"
00:22:54.832  }
00:22:54.832  }
00:22:54.832  ]'
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:54.832   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:54.832   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:54.832    13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:55.092   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:55.092   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:55.092   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:55.092   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:55.092   13:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:22:55.660   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:55.920  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:55.920   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:56.256   13:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:56.516  
00:22:56.516    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:56.516    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:56.516    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:56.776  {
00:22:56.776  "cntlid": 133,
00:22:56.776  "qid": 0,
00:22:56.776  "state": "enabled",
00:22:56.776  "thread": "nvmf_tgt_poll_group_000",
00:22:56.776  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:56.776  "listen_address": {
00:22:56.776  "trtype": "RDMA",
00:22:56.776  "adrfam": "IPv4",
00:22:56.776  "traddr": "192.168.100.8",
00:22:56.776  "trsvcid": "4420"
00:22:56.776  },
00:22:56.776  "peer_address": {
00:22:56.776  "trtype": "RDMA",
00:22:56.776  "adrfam": "IPv4",
00:22:56.776  "traddr": "192.168.100.8",
00:22:56.776  "trsvcid": "37494"
00:22:56.776  },
00:22:56.776  "auth": {
00:22:56.776  "state": "completed",
00:22:56.776  "digest": "sha512",
00:22:56.776  "dhgroup": "ffdhe6144"
00:22:56.776  }
00:22:56.776  }
00:22:56.776  ]'
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:56.776    13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:56.776   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:57.035   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:57.035   13:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:57.603  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:57.603   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:57.863   13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:58.122  
00:22:58.381    13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:58.381    13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:58.381    13:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:58.381   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:58.381    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:58.381    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.381    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:58.381    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.381   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:58.381  {
00:22:58.381  "cntlid": 135,
00:22:58.381  "qid": 0,
00:22:58.381  "state": "enabled",
00:22:58.381  "thread": "nvmf_tgt_poll_group_000",
00:22:58.381  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:22:58.381  "listen_address": {
00:22:58.381  "trtype": "RDMA",
00:22:58.381  "adrfam": "IPv4",
00:22:58.381  "traddr": "192.168.100.8",
00:22:58.381  "trsvcid": "4420"
00:22:58.382  },
00:22:58.382  "peer_address": {
00:22:58.382  "trtype": "RDMA",
00:22:58.382  "adrfam": "IPv4",
00:22:58.382  "traddr": "192.168.100.8",
00:22:58.382  "trsvcid": "38658"
00:22:58.382  },
00:22:58.382  "auth": {
00:22:58.382  "state": "completed",
00:22:58.382  "digest": "sha512",
00:22:58.382  "dhgroup": "ffdhe6144"
00:22:58.382  }
00:22:58.382  }
00:22:58.382  ]'
00:22:58.382    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:58.382   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:58.382    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:58.641   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:58.641    13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:58.641   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:58.641   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:58.641   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:58.900   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:58.900   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:22:59.468   13:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:59.468  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:59.468   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:59.469   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:59.728   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:00.296  
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:00.296   13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:00.296    13:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.296   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:00.296  {
00:23:00.296  "cntlid": 137,
00:23:00.296  "qid": 0,
00:23:00.296  "state": "enabled",
00:23:00.296  "thread": "nvmf_tgt_poll_group_000",
00:23:00.296  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:00.296  "listen_address": {
00:23:00.296  "trtype": "RDMA",
00:23:00.296  "adrfam": "IPv4",
00:23:00.296  "traddr": "192.168.100.8",
00:23:00.296  "trsvcid": "4420"
00:23:00.296  },
00:23:00.296  "peer_address": {
00:23:00.296  "trtype": "RDMA",
00:23:00.296  "adrfam": "IPv4",
00:23:00.296  "traddr": "192.168.100.8",
00:23:00.296  "trsvcid": "55039"
00:23:00.296  },
00:23:00.296  "auth": {
00:23:00.296  "state": "completed",
00:23:00.296  "digest": "sha512",
00:23:00.296  "dhgroup": "ffdhe8192"
00:23:00.296  }
00:23:00.296  }
00:23:00.296  ]'
00:23:00.296    13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:00.555   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:00.555    13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:00.555   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:00.555    13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:00.555   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:00.555   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:00.555   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:00.814   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:23:00.814   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:23:01.383   13:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:01.383  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:01.383   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:01.642   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:02.211  
00:23:02.211    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:02.211    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:02.211    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:02.471   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:02.471    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:02.471    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.471    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:02.471    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.471   13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:02.471  {
00:23:02.471  "cntlid": 139,
00:23:02.471  "qid": 0,
00:23:02.471  "state": "enabled",
00:23:02.471  "thread": "nvmf_tgt_poll_group_000",
00:23:02.471  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:02.471  "listen_address": {
00:23:02.471  "trtype": "RDMA",
00:23:02.471  "adrfam": "IPv4",
00:23:02.471  "traddr": "192.168.100.8",
00:23:02.471  "trsvcid": "4420"
00:23:02.471  },
00:23:02.471  "peer_address": {
00:23:02.471  "trtype": "RDMA",
00:23:02.471  "adrfam": "IPv4",
00:23:02.471  "traddr": "192.168.100.8",
00:23:02.471  "trsvcid": "33625"
00:23:02.471  },
00:23:02.471  "auth": {
00:23:02.471  "state": "completed",
00:23:02.471  "digest": "sha512",
00:23:02.471  "dhgroup": "ffdhe8192"
00:23:02.471  }
00:23:02.471  }
00:23:02.471  ]'
00:23:02.471    13:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:02.471   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:02.471    13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:02.471   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:02.471    13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:02.471   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:02.471   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:02.471   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:02.730   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:23:02.730   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: --dhchap-ctrl-secret DHHC-1:02:ZjY3MzRkZjJjMDM5MzIxMzUzZjliOTczNjVhNjUzMjA0YWVjYWEwMTIwZjNkM2Q17wM3AA==:
00:23:03.299   13:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:03.299  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:03.299   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:03.558   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:04.127  
00:23:04.127    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:04.127    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:04.127    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:04.387   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:04.387   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:04.387  {
00:23:04.387  "cntlid": 141,
00:23:04.387  "qid": 0,
00:23:04.387  "state": "enabled",
00:23:04.387  "thread": "nvmf_tgt_poll_group_000",
00:23:04.387  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:04.387  "listen_address": {
00:23:04.387  "trtype": "RDMA",
00:23:04.387  "adrfam": "IPv4",
00:23:04.387  "traddr": "192.168.100.8",
00:23:04.387  "trsvcid": "4420"
00:23:04.387  },
00:23:04.387  "peer_address": {
00:23:04.387  "trtype": "RDMA",
00:23:04.387  "adrfam": "IPv4",
00:23:04.387  "traddr": "192.168.100.8",
00:23:04.387  "trsvcid": "45383"
00:23:04.387  },
00:23:04.387  "auth": {
00:23:04.387  "state": "completed",
00:23:04.387  "digest": "sha512",
00:23:04.387  "dhgroup": "ffdhe8192"
00:23:04.387  }
00:23:04.387  }
00:23:04.387  ]'
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:04.387   13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:04.387    13:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:04.387   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:04.387    13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:04.387   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:04.387   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:04.387   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:04.646   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:23:04.647   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:01:ODExNTY5YTA4ODM1OGNmN2ViMzE0MjM2OThkZjk3MTCjwNhC:
00:23:05.215   13:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:05.477  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:05.477   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:23:05.478   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:05.478   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:23:05.478   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.478   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:05.737   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:05.737   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:05.738   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:05.738   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:05.996  
00:23:05.996    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:05.996    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:05.996    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:06.256   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.256   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:06.256  {
00:23:06.256  "cntlid": 143,
00:23:06.256  "qid": 0,
00:23:06.256  "state": "enabled",
00:23:06.256  "thread": "nvmf_tgt_poll_group_000",
00:23:06.256  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:06.256  "listen_address": {
00:23:06.256  "trtype": "RDMA",
00:23:06.256  "adrfam": "IPv4",
00:23:06.256  "traddr": "192.168.100.8",
00:23:06.256  "trsvcid": "4420"
00:23:06.256  },
00:23:06.256  "peer_address": {
00:23:06.256  "trtype": "RDMA",
00:23:06.256  "adrfam": "IPv4",
00:23:06.256  "traddr": "192.168.100.8",
00:23:06.256  "trsvcid": "52317"
00:23:06.256  },
00:23:06.256  "auth": {
00:23:06.256  "state": "completed",
00:23:06.256  "digest": "sha512",
00:23:06.256  "dhgroup": "ffdhe8192"
00:23:06.256  }
00:23:06.256  }
00:23:06.256  ]'
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:06.256   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:06.256    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:06.256   13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:06.515    13:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:06.515   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:06.515   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:06.515   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:06.515   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:06.515   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:07.453  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.453    13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:23:07.453    13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:23:07.453    13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:23:07.453    13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:07.453   13:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:07.453   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:23:07.453   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:07.453   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:07.454   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:08.022  
00:23:08.022    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:08.022    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:08.022    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:08.281  {
00:23:08.281  "cntlid": 145,
00:23:08.281  "qid": 0,
00:23:08.281  "state": "enabled",
00:23:08.281  "thread": "nvmf_tgt_poll_group_000",
00:23:08.281  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:08.281  "listen_address": {
00:23:08.281  "trtype": "RDMA",
00:23:08.281  "adrfam": "IPv4",
00:23:08.281  "traddr": "192.168.100.8",
00:23:08.281  "trsvcid": "4420"
00:23:08.281  },
00:23:08.281  "peer_address": {
00:23:08.281  "trtype": "RDMA",
00:23:08.281  "adrfam": "IPv4",
00:23:08.281  "traddr": "192.168.100.8",
00:23:08.281  "trsvcid": "58589"
00:23:08.281  },
00:23:08.281  "auth": {
00:23:08.281  "state": "completed",
00:23:08.281  "digest": "sha512",
00:23:08.281  "dhgroup": "ffdhe8192"
00:23:08.281  }
00:23:08.281  }
00:23:08.281  ]'
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:08.281    13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:08.281   13:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:08.540   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:23:08.540   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:ZjMwMWZmNWI5YmY1NDQ3ZDU2ZmYyYzM4ZTQxYjczZGJhYzcwZTE2ZDBmOTE5MTM0FgVPJA==: --dhchap-ctrl-secret DHHC-1:03:MDIyYTIwZDcxMDg5MTY4YTQ0NWYwYzE5YzJkYmE1NGU3MTcxNDI4YTdlMTJhMDA5NzgzZmNkOTE5OTA0OWNmOY86vbo=:
00:23:09.108   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:09.367  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.367    13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:23:09.367   13:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:23:09.937  request:
00:23:09.937  {
00:23:09.937    "name": "nvme0",
00:23:09.937    "trtype": "rdma",
00:23:09.937    "traddr": "192.168.100.8",
00:23:09.937    "adrfam": "ipv4",
00:23:09.937    "trsvcid": "4420",
00:23:09.937    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:09.937    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:09.937    "prchk_reftag": false,
00:23:09.937    "prchk_guard": false,
00:23:09.937    "hdgst": false,
00:23:09.937    "ddgst": false,
00:23:09.937    "dhchap_key": "key2",
00:23:09.937    "allow_unrecognized_csi": false,
00:23:09.937    "method": "bdev_nvme_attach_controller",
00:23:09.937    "req_id": 1
00:23:09.937  }
00:23:09.937  Got JSON-RPC error response
00:23:09.937  response:
00:23:09.937  {
00:23:09.937    "code": -5,
00:23:09.937    "message": "Input/output error"
00:23:09.937  }
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.937    13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:09.937   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:10.196  request:
00:23:10.196  {
00:23:10.196    "name": "nvme0",
00:23:10.196    "trtype": "rdma",
00:23:10.196    "traddr": "192.168.100.8",
00:23:10.196    "adrfam": "ipv4",
00:23:10.196    "trsvcid": "4420",
00:23:10.197    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:10.197    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:10.197    "prchk_reftag": false,
00:23:10.197    "prchk_guard": false,
00:23:10.197    "hdgst": false,
00:23:10.197    "ddgst": false,
00:23:10.197    "dhchap_key": "key1",
00:23:10.197    "dhchap_ctrlr_key": "ckey2",
00:23:10.197    "allow_unrecognized_csi": false,
00:23:10.197    "method": "bdev_nvme_attach_controller",
00:23:10.197    "req_id": 1
00:23:10.197  }
00:23:10.197  Got JSON-RPC error response
00:23:10.197  response:
00:23:10.197  {
00:23:10.197    "code": -5,
00:23:10.197    "message": "Input/output error"
00:23:10.197  }
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:10.456    13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:10.456   13:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:11.024  request:
00:23:11.024  {
00:23:11.024    "name": "nvme0",
00:23:11.024    "trtype": "rdma",
00:23:11.024    "traddr": "192.168.100.8",
00:23:11.024    "adrfam": "ipv4",
00:23:11.024    "trsvcid": "4420",
00:23:11.024    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:11.024    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:11.024    "prchk_reftag": false,
00:23:11.024    "prchk_guard": false,
00:23:11.024    "hdgst": false,
00:23:11.024    "ddgst": false,
00:23:11.024    "dhchap_key": "key1",
00:23:11.024    "dhchap_ctrlr_key": "ckey1",
00:23:11.024    "allow_unrecognized_csi": false,
00:23:11.024    "method": "bdev_nvme_attach_controller",
00:23:11.024    "req_id": 1
00:23:11.024  }
00:23:11.024  Got JSON-RPC error response
00:23:11.024  response:
00:23:11.024  {
00:23:11.024    "code": -5,
00:23:11.024    "message": "Input/output error"
00:23:11.024  }
00:23:11.024   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:11.024   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:11.024   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:11.024   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3344750
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3344750 ']'
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3344750
00:23:11.025    13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:11.025    13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344750
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344750'
00:23:11.025  killing process with pid 3344750
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3344750
00:23:11.025   13:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3344750
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3369721
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3369721
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3369721 ']'
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:12.403   13:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:12.971   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:12.971   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:23:12.971   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:12.971   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:12.971   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
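With the previous target (pid 3344750) killed, a fresh nvmf_tgt (pid 3369721) is started with --wait-for-rpc, so keys and subsystems can be configured before framework initialization completes, and -L nvmf_auth enables the auth debug log component. A sketch of that startup handshake, assuming init is resumed with the standard framework_start_init RPC (the trace only shows the batched rpc_cmd that follows):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Wait until the app answers on the default socket /var/tmp/spdk.sock,
    # then resume framework initialization.
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    $SPDK/scripts/rpc.py framework_start_init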
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3369721
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3369721 ']'
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:13.231  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.231   13:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.800  null0
00:23:13.800   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.800   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:23:13.800   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oNa
00:23:13.800   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.800   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.gsB ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gsB
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.495
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.L66 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L66
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Uv3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.suE ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.suE
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jIm
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
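target/auth.sh@174-176 then registers the secrets generated earlier in the run with the target's keyring; note there is no ckey3, so key3 is left for unidirectional authentication only (hence the [[ -n '' ]] no-op above). Spelled out with this run's file names:

    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.oNa
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gsB
    rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.495
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L66
    rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha384.Uv3
    rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.suE
    rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.jIm   # no matching ckey3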
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:13.801   13:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:14.739  nvme0n1
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:14.739   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.739   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:14.739  {
00:23:14.739    "cntlid": 1,
00:23:14.739    "qid": 0,
00:23:14.739    "state": "enabled",
00:23:14.739    "thread": "nvmf_tgt_poll_group_000",
00:23:14.739    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:14.739    "listen_address": {
00:23:14.739      "trtype": "RDMA",
00:23:14.739      "adrfam": "IPv4",
00:23:14.739      "traddr": "192.168.100.8",
00:23:14.739      "trsvcid": "4420"
00:23:14.739    },
00:23:14.739    "peer_address": {
00:23:14.739      "trtype": "RDMA",
00:23:14.739      "adrfam": "IPv4",
00:23:14.739      "traddr": "192.168.100.8",
00:23:14.739      "trsvcid": "53754"
00:23:14.739    },
00:23:14.739    "auth": {
00:23:14.739      "state": "completed",
00:23:14.739      "digest": "sha512",
00:23:14.739      "dhgroup": "ffdhe8192"
00:23:14.739    }
00:23:14.739  }
00:23:14.739  ]'
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:14.739   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:14.739    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:14.999   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:14.999    13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:14.999   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
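connect_authenticate closes by checking the negotiated parameters reported in the qpair listing above: digest sha512, DH group ffdhe8192, and auth state completed. The equivalent standalone checks, assuming the nvmf_subsystem_get_qpairs output is held in $qpairs:

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]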
00:23:14.999   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:14.999   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:15.259   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:15.259   13:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:15.827   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:16.086  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
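The same host entry is also exercised with the Linux kernel initiator: nvme-cli gets the DHHC-1 secret directly on the command line (target/auth.sh@36), connects over RDMA, and disconnects cleanly before the host entry is removed. In outline, with the secret elided (the full value appears in the trace above):

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
         --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
         --dhchap-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0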
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:16.086   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.086    13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:16.087   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.087   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:16.087   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:16.087   13:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:16.346  request:
00:23:16.346  {
00:23:16.346    "name": "nvme0",
00:23:16.346    "trtype": "rdma",
00:23:16.346    "traddr": "192.168.100.8",
00:23:16.346    "adrfam": "ipv4",
00:23:16.346    "trsvcid": "4420",
00:23:16.346    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:16.346    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:16.346    "prchk_reftag": false,
00:23:16.346    "prchk_guard": false,
00:23:16.346    "hdgst": false,
00:23:16.346    "ddgst": false,
00:23:16.346    "dhchap_key": "key3",
00:23:16.346    "allow_unrecognized_csi": false,
00:23:16.346    "method": "bdev_nvme_attach_controller",
00:23:16.346    "req_id": 1
00:23:16.346  }
00:23:16.346  Got JSON-RPC error response
00:23:16.346  response:
00:23:16.346  {
00:23:16.346    "code": -5,
00:23:16.346    "message": "Input/output error"
00:23:16.346  }
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:16.346    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:23:16.346    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:23:16.346   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.606    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:16.606   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:16.865  request:
00:23:16.865  {
00:23:16.865    "name": "nvme0",
00:23:16.865    "trtype": "rdma",
00:23:16.865    "traddr": "192.168.100.8",
00:23:16.865    "adrfam": "ipv4",
00:23:16.865    "trsvcid": "4420",
00:23:16.865    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:16.865    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:16.865    "prchk_reftag": false,
00:23:16.865    "prchk_guard": false,
00:23:16.865    "hdgst": false,
00:23:16.865    "ddgst": false,
00:23:16.865    "dhchap_key": "key3",
00:23:16.865    "allow_unrecognized_csi": false,
00:23:16.866    "method": "bdev_nvme_attach_controller",
00:23:16.866    "req_id": 1
00:23:16.866  }
00:23:16.866  Got JSON-RPC error response
00:23:16.866  response:
00:23:16.866  {
00:23:16.866    "code": -5,
00:23:16.866    "message": "Input/output error"
00:23:16.866  }
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:16.866    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:16.866    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:23:16.866    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:16.866    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:16.866   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
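The two expected failures above are negative negotiation tests. target/auth.sh@183 first limits the host to sha256 digests, and the attach with key3 fails, presumably because key3 is a sha512-hashed secret (its key file carries the sha512 suffix and the earlier handshake completed with sha512). target/auth.sh@187 then limits the host to the ffdhe2048 DH group and the attach fails again, before @197 restores the full digest and DH group lists. The restriction/restore sequence as issued on the host socket (rpc.py abbreviating the full scripts/rpc.py path in the trace):

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192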
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:17.125    13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:17.125   13:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:17.385  request:
00:23:17.385  {
00:23:17.385    "name": "nvme0",
00:23:17.385    "trtype": "rdma",
00:23:17.385    "traddr": "192.168.100.8",
00:23:17.385    "adrfam": "ipv4",
00:23:17.385    "trsvcid": "4420",
00:23:17.385    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:17.385    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:17.385    "prchk_reftag": false,
00:23:17.385    "prchk_guard": false,
00:23:17.385    "hdgst": false,
00:23:17.385    "ddgst": false,
00:23:17.385    "dhchap_key": "key0",
00:23:17.385    "dhchap_ctrlr_key": "key1",
00:23:17.385    "allow_unrecognized_csi": false,
00:23:17.385    "method": "bdev_nvme_attach_controller",
00:23:17.385    "req_id": 1
00:23:17.385  }
00:23:17.385  Got JSON-RPC error response
00:23:17.385  response:
00:23:17.385  {
00:23:17.385    "code": -5,
00:23:17.385    "message": "Input/output error"
00:23:17.385  }
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
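At target/auth.sh@208-209 the host entry was re-added with no DH-HMAC-CHAP keys at all, so the target no longer requests authentication. The attach above, which also demands controller (bidirectional) authentication via --dhchap-ctrlr-key, fails, apparently because a keyless host entry cannot authenticate the controller; the plain attach with key0 that follows succeeds since the key is simply never exercised. The two cases side by side (rpc.py abbreviating the full scripts/rpc.py path):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1   # fails with -5
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0                           # succeeds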
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:17.385   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:17.644  nvme0n1
00:23:17.644    13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:23:17.644    13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:23:17.644    13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:17.903   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.903   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:17.903   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:18.162   13:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:19.099  nvme0n1
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:19.099   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:19.099   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:19.099   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:19.099   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:19.099   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:19.099    13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:23:19.358   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:19.358   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:19.359   13:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: --dhchap-ctrl-secret DHHC-1:03:ODEyZmE1YmJhN2Q1YjExMGE2NTcxNWJkNjcwMmE0NGM2MWQzMDQ1NmE2MWMwZmJlYzE4OTM4MzkzZjZjNjQwMgE7H48=:
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:23:19.927    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:23:19.927   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:23:19.927   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:19.927   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:20.186    13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:20.186   13:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:20.802  request:
00:23:20.802  {
00:23:20.802    "name": "nvme0",
00:23:20.802    "trtype": "rdma",
00:23:20.802    "traddr": "192.168.100.8",
00:23:20.802    "adrfam": "ipv4",
00:23:20.802    "trsvcid": "4420",
00:23:20.802    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:20.802    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:23:20.802    "prchk_reftag": false,
00:23:20.802    "prchk_guard": false,
00:23:20.802    "hdgst": false,
00:23:20.802    "ddgst": false,
00:23:20.802    "dhchap_key": "key1",
00:23:20.802    "allow_unrecognized_csi": false,
00:23:20.802    "method": "bdev_nvme_attach_controller",
00:23:20.802    "req_id": 1
00:23:20.802  }
00:23:20.802  Got JSON-RPC error response
00:23:20.802  response:
00:23:20.802  {
00:23:20.802    "code": -5,
00:23:20.802    "message": "Input/output error"
00:23:20.802  }
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:20.802   13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:21.438  nvme0n1
00:23:21.438    13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:23:21.438    13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:23:21.438    13:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:21.438   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:21.438   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:21.438   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:21.698   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:21.957  nvme0n1
00:23:21.957    13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:23:21.957    13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:23:21.957    13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:22.216   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:22.216   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:22.216   13:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: '' 2s
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9:
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9: ]]
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mjg4OWVkZGIwZGRiZDNlNmNkZGRkYmYyNTRjZTFhZDaPKOr9:
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:22.476   13:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:24.383   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:23:24.383   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:23:24.384   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:23:24.384   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: 2s
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==:
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==: ]]
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjM4MWU5M2YzYjVjZDQzMTcwM2YxMzliNDdiYjc0ODQ5MmQ0OTBiODBiNzZhNzIwxZi5CA==:
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:24.643   13:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
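target/auth.sh@239-245 rotate keys under a live kernel-initiator connection: the target's expected keys are swapped with nvmf_subsystem_set_keys, the matching DHHC-1 secret is written into the controller's nvme-fabrics sysfs attribute (the redirection target is not visible in the trace; dhchap_secret and dhchap_ctrl_secret are the standard attributes and are assumed here), and waitforblk confirms nvme0n1 is still present afterwards. In outline, secrets elided:

    dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
    echo 'DHHC-1:01:...' > "$dev/dhchap_secret"        # rotate the host secret
    sleep 2s                                           # allow re-authentication
    lsblk -l -o NAME | grep -q -w nvme0n1              # namespace survived
    echo 'DHHC-1:02:...' > "$dev/dhchap_ctrl_secret"   # rotate the controller secret
    sleep 2s
    lsblk -l -o NAME | grep -q -w nvme0n1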
00:23:26.547   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:26.807  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:26.807   13:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:27.376  nvme0n1
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:27.376   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:27.946    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:23:27.946    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:23:27.946    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:23:28.205   13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:23:28.205    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:23:28.205    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:23:28.205    13:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
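This block re-keys a live SPDK-host controller instead: nvmf_subsystem_set_keys swaps the target-side keys, bdev_nvme_set_keys pushes the matching keys to the attached controller, and bdev_nvme_get_controllers confirms nvme0 is still up; the same pair of calls is then repeated with no keys, disabling authentication entirely. The paired target/host calls (rpc.py abbreviating the full scripts/rpc.py path):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3          # target side
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3          # host side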
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:28.465    13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:28.465   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:29.033  request:
00:23:29.033  {
00:23:29.033    "name": "nvme0",
00:23:29.033    "dhchap_key": "key1",
00:23:29.033    "dhchap_ctrlr_key": "key3",
00:23:29.033    "method": "bdev_nvme_set_keys",
00:23:29.033    "req_id": 1
00:23:29.033  }
00:23:29.033  Got JSON-RPC error response
00:23:29.033  response:
00:23:29.033  {
00:23:29.033    "code": -13,
00:23:29.033    "message": "Permission denied"
00:23:29.033  }
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
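Note: the -13 (Permission denied) above is the expected negative case. The target was just re-keyed to key2/key3 via nvmf_subsystem_set_keys, so a host-side bdev_nvme_set_keys proposing key1/key3 no longer matches, and the NOT helper converts the RPC failure into a test pass. A minimal sketch of the same call, assuming key1/key3 are already loaded in the host keyring:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Expected result here: JSON-RPC error -13, since the target now only
  # accepts the key2 (host) / key3 (controller) pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3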
00:23:29.033    13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:29.033    13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:23:29.033    13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:23:29.033   13:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:23:30.412    13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:30.412    13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:23:30.412    13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
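Note: auth.sh lines 262-263 form a teardown-wait loop: after the rejected re-key, the host controller (created with a 1 s loss timeout) is destroyed, and the loop polls bdev_nvme_get_controllers until the list is empty. A condensed sketch of the loop as traced:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Poll until the host has dropped all NVMe controllers.
  while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done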
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:30.412   13:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:30.981  nvme0n1
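Note: the reattach above succeeds because key0/key1 is exactly the pair the target was set to at auth.sh line 267; nvme0n1 is the namespace bdev created on success. The two extra flags are what make the negative tests converge quickly:

  # --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 mean a later
  # authentication failure destroys the controller after roughly one
  # second instead of retrying indefinitely, which is what the
  # jq-length polling loops before and after this point wait for.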
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:30.981    13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:30.981   13:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:31.551  request:
00:23:31.551  {
00:23:31.551    "name": "nvme0",
00:23:31.551    "dhchap_key": "key2",
00:23:31.551    "dhchap_ctrlr_key": "key0",
00:23:31.551    "method": "bdev_nvme_set_keys",
00:23:31.551    "req_id": 1
00:23:31.551  }
00:23:31.551  Got JSON-RPC error response
00:23:31.551  response:
00:23:31.551  {
00:23:31.551    "code": -13,
00:23:31.551    "message": "Permission denied"
00:23:31.551  }
00:23:31.551   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:31.551   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:31.551   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:31.551   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
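Note: same pattern as the first negative case, from the other side of the key pair: the target holds key2/key3, the host proposes key2/key0, and the mismatched controller key yields -13 again. The NOT wrapper traced above boils down to asserting failure; a condensed sketch:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # The command must fail; an unexpected success fails the test.
  if $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
          --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo 'unexpected success' >&2
      exit 1
  fi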
00:23:31.551    13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:23:31.551    13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:23:31.551    13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:31.810   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:23:31.810   13:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:23:32.749    13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:23:32.749    13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:23:32.749    13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3344899
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3344899 ']'
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3344899
00:23:33.008    13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:33.008    13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344899
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344899'
00:23:33.008  killing process with pid 3344899
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3344899
00:23:33.008   13:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3344899
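Note: killprocess, here tearing down pid 3344899 (comm reactor_1 in the ps output, i.e. an SPDK app), follows a fixed shape visible in the trace: verify the pid, refuse to kill anything running as sudo, then kill and wait. A minimal sketch of that logic:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 1                        # still alive?
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # safety check
      kill "$pid" && wait "$pid"   # wait works because the app is our child
  }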
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:23:35.545  rmmod nvme_rdma
00:23:35.545  rmmod nvme_fabrics
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3369721 ']'
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3369721
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3369721 ']'
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3369721
00:23:35.545    13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:35.545    13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369721
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369721'
00:23:35.545  killing process with pid 3369721
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3369721
00:23:35.545   13:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3369721
00:23:36.482   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:36.482   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:23:36.482   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oNa /tmp/spdk.key-sha256.495 /tmp/spdk.key-sha384.Uv3 /tmp/spdk.key-sha512.jIm /tmp/spdk.key-sha512.gsB /tmp/spdk.key-sha384.L66 /tmp/spdk.key-sha256.suE '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log
00:23:36.482  
00:23:36.482  real	2m49.297s
00:23:36.482  user	6m23.862s
00:23:36.482  sys	0m25.200s
00:23:36.482   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:36.482   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:36.482  ************************************
00:23:36.482  END TEST nvmf_auth_target
00:23:36.482  ************************************
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']'
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']'
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:36.483   13:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:36.743  ************************************
00:23:36.743  START TEST nvmf_fuzz
00:23:36.743  ************************************
00:23:36.743   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma
00:23:36.743  * Looking for test storage...
00:23:36.743  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:36.743     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:36.743  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.743  		--rc genhtml_branch_coverage=1
00:23:36.743  		--rc genhtml_function_coverage=1
00:23:36.743  		--rc genhtml_legend=1
00:23:36.743  		--rc geninfo_all_blocks=1
00:23:36.743  		--rc geninfo_unexecuted_blocks=1
00:23:36.743  		
00:23:36.743  		'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:36.743  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.743  		--rc genhtml_branch_coverage=1
00:23:36.743  		--rc genhtml_function_coverage=1
00:23:36.743  		--rc genhtml_legend=1
00:23:36.743  		--rc geninfo_all_blocks=1
00:23:36.743  		--rc geninfo_unexecuted_blocks=1
00:23:36.743  		
00:23:36.743  		'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:36.743  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.743  		--rc genhtml_branch_coverage=1
00:23:36.743  		--rc genhtml_function_coverage=1
00:23:36.743  		--rc genhtml_legend=1
00:23:36.743  		--rc geninfo_all_blocks=1
00:23:36.743  		--rc geninfo_unexecuted_blocks=1
00:23:36.743  		
00:23:36.743  		'
00:23:36.743    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:23:36.743  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.743  		--rc genhtml_branch_coverage=1
00:23:36.744  		--rc genhtml_function_coverage=1
00:23:36.744  		--rc genhtml_legend=1
00:23:36.744  		--rc geninfo_all_blocks=1
00:23:36.744  		--rc geninfo_unexecuted_blocks=1
00:23:36.744  		
00:23:36.744  		'
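Note: the lcov block above is scripts/common.sh deciding whether the installed lcov (1.15 per the awk'd --version output) predates version 2; since lt 1.15 2 succeeds, the legacy --rc branch/function coverage options get exported. The comparison splits both versions on the characters .-: and compares component by component; a minimal sketch, assuming purely numeric components:

  lt() {   # lt VER1 VER2: succeed when VER1 < VER2
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # equal, hence not strictly less-than
  }
  lt 1.15 2 && echo 'legacy lcov options needed'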
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:36.744     13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:36.744      13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.744      13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.744      13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.744      13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH
00:23:36.744      13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
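Note: PATH in the echo above carries multiple copies of the golangci/protoc/go prefixes because paths/export.sh prepends them each time a nested test re-sources the SPDK environment; harmless, but noisy. A dedup pass (not part of the original scripts) could look like:

  # Keep the first occurrence of each PATH entry, preserving order.
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  PATH=${PATH%:}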
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:36.744  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
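Note: the "integer expression expected" warning above (common.sh line 33) is a scripting nit rather than a test failure: an unset variable expands to the empty string before reaching the numeric test, as the '[' '' -eq 1 ']' trace shows, and the branch simply falls through. A hypothetical hardening, with SOME_FLAG standing in for the variable whose name is not visible in the trace:

  # Default the flag to 0 so the numeric test always sees an integer.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      :   # branch body as in nvmf/common.sh; not shown in the trace
  fi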
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:36.744    13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable
00:23:36.744   13:49:36 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=()
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:23:43.315  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:23:43.315  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:23:43.315   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:23:43.316  Found net devices under 0000:d9:00.0: mlx_0_0
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:23:43.316  Found net devices under 0000:d9:00.1: mlx_0_1
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 ))
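Note: device discovery above reduces the PCI scan to the two Mellanox functions (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1, then maps each to its netdev through sysfs. The mapping step, condensed from the traced globs:

  # Each PCI function advertises its interface name under .../net/.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "$pci -> ${dev##*/}"     # mlx_0_0 and mlx_0_1 in this run
      done
  done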
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:23:43.316    13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm
00:23:43.316   13:49:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips
00:23:43.316   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:23:43.316    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list
00:23:43.316    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:23:43.316    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:23:43.316     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:23:43.316     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:23:43.576  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:23:43.576      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:23:43.576      altname enp217s0f0np0
00:23:43.576      altname ens818f0np0
00:23:43.576      inet 192.168.100.8/24 scope global mlx_0_0
00:23:43.576         valid_lft forever preferred_lft forever
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:23:43.576  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:23:43.576      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:23:43.576      altname enp217s0f1np1
00:23:43.576      altname ens818f1np1
00:23:43.576      inet 192.168.100.9/24 scope global mlx_0_1
00:23:43.576         valid_lft forever preferred_lft forever
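Note: allocate_nic_ips resolves one IPv4 address per RDMA-capable interface; both the commands and their outputs are in the trace. The extraction helper, condensed from common.sh@116-117:

  # First IPv4 address of an interface, with the prefix length stripped.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # 192.168.100.8
  get_ip_address mlx_0_1    # 192.168.100.9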
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:23:43.576   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:23:43.576    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:23:43.576      13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:23:43.576      13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0
00:23:43.576     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1
00:23:43.577     13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:23:43.577  192.168.100.9'
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:23:43.577  192.168.100.9'
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:23:43.577  192.168.100.9'
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2
00:23:43.577    13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3377278
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3377278
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3377278 ']'
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:43.577  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:43.577   13:49:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.515   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.775  Malloc0
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
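Note: lines 19-27 of fabrics_fuzz.sh build the whole fuzz target: an RDMA transport, a malloc bdev (arguments 64 512, size and block size in the rpc.py convention), a subsystem carrying that namespace, and a listener on 192.168.100.8:4420. Condensed, with every value taken from the trace and rpc_cmd's default /var/tmp/spdk.sock socket assumed:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420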
00:23:44.775   13:49:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a
00:24:16.862  Fuzzing completed. Shutting down the fuzz application
00:24:16.862  
00:24:16.862  Dumping successful admin opcodes:
00:24:16.862  9, 10, 
00:24:16.862  Dumping successful io opcodes:
00:24:16.862  0, 9, 
00:24:16.862  NS: 0x2000008eeec0 I/O qp, Total commands completed: 861675, total successful commands: 5010, random_seed: 2594053888
00:24:16.862  NS: 0x2000008eeec0 admin qp, Total commands completed: 127328, total successful commands: 29, random_seed: 949602368
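Note on the numbers above: the first pass runs with -t 30 (and indeed spans 13:49:44 to 13:50:15 in the timestamps), so 861675 I/O commands works out to roughly 28.7k commands per second, with 5010 successes, about 0.58 percent, the expected shape for randomly generated commands against a single malloc namespace.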
00:24:16.862   13:50:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:24:17.121  Fuzzing completed. Shutting down the fuzz application
00:24:17.121  
00:24:17.121  Dumping successful admin opcodes:
00:24:17.121  
00:24:17.121  Dumping successful io opcodes:
00:24:17.121  
00:24:17.122  NS: 0x2000008eeec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3995774606
00:24:17.122  NS: 0x2000008eeec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3995870664
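Note: the second pass appears to replay the fixed command set from example.json (-j) rather than fuzz on a timer, so it finishes within a second and completes only 16 admin commands with zero successes; that is not treated as a failure here, since the script proceeds straight to nvmf_delete_subsystem below.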
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:24:17.122  rmmod nvme_rdma
00:24:17.122  rmmod nvme_fabrics
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3377278 ']'
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3377278
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3377278 ']'
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3377278
00:24:17.122    13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:17.122    13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3377278
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3377278'
00:24:17.122  killing process with pid 3377278
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3377278
00:24:17.122   13:50:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3377278
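[Editor's note: the teardown above follows the usual guard pattern: kill -0 to confirm the pid is alive, refuse to kill anything whose comm is sudo, then kill and wait. A condensed sketch of the same sequence (pid copied from the log; wait assumes the pid is a child of the current shell, as it is here):]

    pid=3377278
    if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        kill "$pid" && wait "$pid"
    fi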
00:24:18.501   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:18.501   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:24:18.501   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:24:18.760  
00:24:18.760  real	0m42.052s
00:24:18.760  user	0m56.301s
00:24:18.760  sys	0m18.003s
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:18.760  ************************************
00:24:18.760  END TEST nvmf_fuzz
00:24:18.760  ************************************
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:18.760  ************************************
00:24:18.760  START TEST nvmf_multiconnection
00:24:18.760  ************************************
00:24:18.760   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma
00:24:18.760  * Looking for test storage...
00:24:18.760  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:24:18.760    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:18.760     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version
00:24:18.760     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-:
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-:
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:19.093     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:19.093  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:19.093  		--rc genhtml_branch_coverage=1
00:24:19.093  		--rc genhtml_function_coverage=1
00:24:19.093  		--rc genhtml_legend=1
00:24:19.093  		--rc geninfo_all_blocks=1
00:24:19.093  		--rc geninfo_unexecuted_blocks=1
00:24:19.093  		
00:24:19.093  		'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:19.093  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:19.093  		--rc genhtml_branch_coverage=1
00:24:19.093  		--rc genhtml_function_coverage=1
00:24:19.093  		--rc genhtml_legend=1
00:24:19.093  		--rc geninfo_all_blocks=1
00:24:19.093  		--rc geninfo_unexecuted_blocks=1
00:24:19.093  		
00:24:19.093  		'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:19.093  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:19.093  		--rc genhtml_branch_coverage=1
00:24:19.093  		--rc genhtml_function_coverage=1
00:24:19.093  		--rc genhtml_legend=1
00:24:19.093  		--rc geninfo_all_blocks=1
00:24:19.093  		--rc geninfo_unexecuted_blocks=1
00:24:19.093  		
00:24:19.093  		'
00:24:19.093    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:19.093  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:19.093  		--rc genhtml_branch_coverage=1
00:24:19.093  		--rc genhtml_function_coverage=1
00:24:19.093  		--rc genhtml_legend=1
00:24:19.093  		--rc geninfo_all_blocks=1
00:24:19.093  		--rc geninfo_unexecuted_blocks=1
00:24:19.093  		
00:24:19.094  		'
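[Editor's note: the cmp_versions trace above splits "1.15" and "2" on the characters ".-:" and compares components numerically, concluding the installed lcov predates 2.x and selecting the legacy --rc option set exported just above. A minimal standalone sketch of the same comparison, assuming purely numeric components; the helper name version_lt is illustrative, not from scripts/common.sh:]

    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use the legacy option set"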
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:19.094     13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:19.094      13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.094      13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.094      13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.094      13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:24:19.094      13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:19.094  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
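[Editor's note: the "integer expression expected" message above is bash's standard complaint when test's -eq sees an empty operand, as in the '[' '' -eq 1 ']' evaluated on the previous line. A hedged fix sketch; the variable name is illustrative, not the one common.sh actually tests:]

    # Default the flag to 0 so the numeric test always sees an integer.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi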
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:19.094    13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable
00:24:19.094   13:50:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=()
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:24:25.676  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:24:25.676  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:24:25.676  Found net devices under 0000:d9:00.0: mlx_0_0
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:24:25.676  Found net devices under 0000:d9:00.1: mlx_0_1
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 ))
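[Editor's note: the discovery loop above resolves each Mellanox PCI function to its kernel net devices by globbing sysfs, then keeps only the basenames (mlx_0_0, mlx_0_1). A minimal sketch of the same lookup, with the PCI address copied from the log; adjust for other hosts:]

    pci=0000:d9:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        # The glob stays literal if no net devices exist, so test before printing.
        [ -e "$path" ] && echo "net device under $pci: ${path##*/}"
    done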
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:24:25.676     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:24:25.676     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}'
00:24:25.676    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:24:25.676   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:24:25.677  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:24:25.677      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:24:25.677      altname enp217s0f0np0
00:24:25.677      altname ens818f0np0
00:24:25.677      inet 192.168.100.8/24 scope global mlx_0_0
00:24:25.677         valid_lft forever preferred_lft forever
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:24:25.677  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:24:25.677      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:24:25.677      altname enp217s0f1np1
00:24:25.677      altname ens818f1np1
00:24:25.677      inet 192.168.100.9/24 scope global mlx_0_1
00:24:25.677         valid_lft forever preferred_lft forever
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:24:25.677      13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:24:25.677      13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1
00:24:25.677     13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:24:25.677  192.168.100.9'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:24:25.677  192.168.100.9'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:24:25.677  192.168.100.9'
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2
00:24:25.677    13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
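[Editor's note: NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP above are simply the first and second lines of the RDMA IP list. An equivalent extraction, assuming the same two-address list seen in this run:]

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)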
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3386283
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3386283
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3386283 ']'
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:25.677  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:25.677   13:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:25.936  [2024-12-14 13:50:25.457813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:24:25.936  [2024-12-14 13:50:25.457915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:25.936  [2024-12-14 13:50:25.593775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:26.195  [2024-12-14 13:50:25.694323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:26.195  [2024-12-14 13:50:25.694370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:26.195  [2024-12-14 13:50:25.694382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:26.195  [2024-12-14 13:50:25.694395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:26.195  [2024-12-14 13:50:25.694405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:26.195  [2024-12-14 13:50:25.697125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:24:26.195  [2024-12-14 13:50:25.697197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:24:26.196  [2024-12-14 13:50:25.697293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:26.196  [2024-12-14 13:50:25.697301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.764   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:26.764  [2024-12-14 13:50:26.350074] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f2c9bb31940) succeed.
00:24:26.764  [2024-12-14 13:50:26.360146] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f2c9b1bd940) succeed.
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.023    13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.023  Malloc1
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.023   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.024  [2024-12-14 13:50:26.720330] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
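[Editor's note: each iteration of the loop above issues the same four RPCs per subsystem: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add an RDMA listener. The equivalent direct calls for cnode1 via SPDK's rpc.py, which defaults to the /var/tmp/spdk.sock socket this target listens on:]

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420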
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.024   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.283  Malloc2
00:24:27.283   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.283   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:24:27.283   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.283   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.283   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284  Malloc3
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284  Malloc4
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.284   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.543  Malloc5
00:24:27.543   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544  Malloc6
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.544   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.803  Malloc7
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.803   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804  Malloc8
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804  Malloc9
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.804   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.063  Malloc10
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:24:28.063   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064  Malloc11
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
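The trace above shows target/multiconnection.sh provisioning eleven subsystems (cnode1 through cnode11): each pass of the loop creates a 64 MB malloc bdev with a 512-byte block size, creates subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i and any-host access (-a), attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420. A minimal standalone sketch of the same target-side sequence, assuming a running SPDK target and the stock scripts/rpc.py client under the workspace path seen elsewhere in this log:

    #!/usr/bin/env bash
    # Sketch of the multiconnection.sh@21-25 steps traced above; the rpc.py
    # path and subsystem count are assumptions, the RPC names and arguments
    # are taken verbatim from the trace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done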
00:24:28.064    13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:28.064   13:50:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:24:29.001   13:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:24:29.001   13:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:29.001   13:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:29.001   13:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:29.001   13:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:31.536    13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:31.536    13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:31.536   13:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420
00:24:32.104   13:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:24:32.104   13:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:32.104   13:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:32.104   13:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:32.104   13:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:34.010    13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:34.010    13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.010   13:50:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420
00:24:35.389   13:50:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:24:35.389   13:50:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:35.389   13:50:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:35.389   13:50:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:35.389   13:50:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:37.294    13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:37.294    13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:37.294   13:50:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420
00:24:38.231   13:50:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:38.231   13:50:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:38.231   13:50:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:38.231   13:50:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:38.231   13:50:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:40.158    13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:40.158    13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:40.158   13:50:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:24:41.095   13:50:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:24:41.095   13:50:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:41.095   13:50:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:41.095   13:50:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:41.095   13:50:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:43.000   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:43.259    13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:43.259    13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:24:43.259   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:43.259   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:43.259   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:43.259   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:43.259   13:50:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420
00:24:44.195   13:50:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:24:44.195   13:50:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:44.195   13:50:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:44.195   13:50:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:44.195   13:50:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:46.106    13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:46.106    13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.106   13:50:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420
00:24:47.043   13:50:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:24:47.044   13:50:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:47.044   13:50:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:47.044   13:50:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:47.044   13:50:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:49.579    13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:49.579    13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.579   13:50:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420
00:24:50.147   13:50:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:24:50.147   13:50:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:50.147   13:50:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:50.147   13:50:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:50.147   13:50:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:52.683    13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:52.683    13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:52.683   13:50:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420
00:24:53.254   13:50:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:24:53.255   13:50:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:53.255   13:50:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:53.255   13:50:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:53.255   13:50:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:55.223    13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:55.223    13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:55.223   13:50:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420
00:24:56.160   13:50:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:24:56.160   13:50:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:56.160   13:50:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:56.160   13:50:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:56.160   13:50:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:58.692    13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:58.692    13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:58.692   13:50:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420
00:24:59.260   13:50:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:24:59.260   13:50:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:59.260   13:50:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:59.260   13:50:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:59.260   13:50:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:01.166   13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:01.166    13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:01.166    13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11
00:25:01.166   13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:01.166   13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:01.166   13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
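On the host side, each of the eleven connections follows the same two-step pattern: nvme connect over RDMA to the per-subsystem NQN, then the waitforserial helper polls lsblk until exactly one block device reports the subsystem's serial (SPDK1 through SPDK11). A rough reconstruction of that poll from the trace, keeping the 2-second sleep and 15-iteration bound it shows; the --hostnqn/--hostid values are elided here for brevity:

    # Poll until one device with the given serial is visible (sketch of the
    # waitforserial behaviour in the trace above; ~30 s worst case).
    waitforserial() {
        local serial=$1 i=0 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == 1 )) && return 0
        done
        return 1
    }

    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
        -a 192.168.100.8 -s 4420
    waitforserial SPDK1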
00:25:01.166   13:51:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:01.425  [global]
00:25:01.425  thread=1
00:25:01.425  invalidate=1
00:25:01.425  rw=read
00:25:01.425  time_based=1
00:25:01.425  runtime=10
00:25:01.425  ioengine=libaio
00:25:01.425  direct=1
00:25:01.425  bs=262144
00:25:01.425  iodepth=64
00:25:01.426  norandommap=1
00:25:01.426  numjobs=1
00:25:01.426  
00:25:01.426  [job0]
00:25:01.426  filename=/dev/nvme0n1
00:25:01.426  [job1]
00:25:01.426  filename=/dev/nvme10n1
00:25:01.426  [job2]
00:25:01.426  filename=/dev/nvme1n1
00:25:01.426  [job3]
00:25:01.426  filename=/dev/nvme2n1
00:25:01.426  [job4]
00:25:01.426  filename=/dev/nvme3n1
00:25:01.426  [job5]
00:25:01.426  filename=/dev/nvme4n1
00:25:01.426  [job6]
00:25:01.426  filename=/dev/nvme5n1
00:25:01.426  [job7]
00:25:01.426  filename=/dev/nvme6n1
00:25:01.426  [job8]
00:25:01.426  filename=/dev/nvme7n1
00:25:01.426  [job9]
00:25:01.426  filename=/dev/nvme8n1
00:25:01.426  [job10]
00:25:01.426  filename=/dev/nvme9n1
00:25:01.426  Could not set queue depth (nvme0n1)
00:25:01.426  Could not set queue depth (nvme10n1)
00:25:01.426  Could not set queue depth (nvme1n1)
00:25:01.426  Could not set queue depth (nvme2n1)
00:25:01.426  Could not set queue depth (nvme3n1)
00:25:01.426  Could not set queue depth (nvme4n1)
00:25:01.426  Could not set queue depth (nvme5n1)
00:25:01.426  Could not set queue depth (nvme6n1)
00:25:01.426  Could not set queue depth (nvme7n1)
00:25:01.426  Could not set queue depth (nvme8n1)
00:25:01.426  Could not set queue depth (nvme9n1)
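The fio-wrapper flags map directly onto the job file echoed above: -i 262144 becomes bs=262144 (256 KiB), -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes time_based runtime=10, with one job per connected namespace (/dev/nvme0n1 through /dev/nvme10n1, eleven in all). The "Could not set queue depth" lines are fio warnings that it could not adjust the per-device queue settings, likely because the sysfs queue attributes of these NVMe-oF block devices are not writable; they appear benign here, since every job still launches at iodepth=64 in the libaio submission path, as the job lines below confirm. For reference, the invocation that produced this job file:

    # Same call as traced above; the flag-to-jobfile mapping is inferred
    # from the [global] section, not from wrapper documentation.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper \
        -p nvmf -i 262144 -d 64 -t read -r 10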
00:25:01.991  job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.991  job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:01.992  fio-3.35
00:25:01.992  Starting 11 threads
00:25:14.206  
00:25:14.206  job0: (groupid=0, jobs=1): err= 0: pid=3392868: Sat Dec 14 13:51:11 2024
00:25:14.206    read: IOPS=3334, BW=834MiB/s (874MB/s)(8351MiB/10017msec)
00:25:14.206      slat (usec): min=10, max=30792, avg=279.01, stdev=693.19
00:25:14.206      clat (usec): min=792, max=83132, avg=18893.63, stdev=6273.47
00:25:14.206       lat (usec): min=835, max=83175, avg=19172.64, stdev=6353.84
00:25:14.206      clat percentiles (usec):
00:25:14.206       |  1.00th=[ 4293],  5.00th=[15533], 10.00th=[16450], 20.00th=[16909],
00:25:14.207       | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957],
00:25:14.207       | 70.00th=[18220], 80.00th=[18482], 90.00th=[24511], 95.00th=[34341],
00:25:14.207       | 99.00th=[45876], 99.50th=[49546], 99.90th=[53216], 99.95th=[53740],
00:25:14.207       | 99.99th=[78119]
00:25:14.207     bw (  KiB/s): min=474624, max=946688, per=24.10%, avg=853563.55, stdev=127546.55, samples=20
00:25:14.207     iops        : min= 1854, max= 3698, avg=3334.20, stdev=498.23, samples=20
00:25:14.207    lat (usec)   : 1000=0.02%
00:25:14.207    lat (msec)   : 2=0.22%, 4=0.67%, 10=2.27%, 20=85.69%, 50=10.72%
00:25:14.207    lat (msec)   : 100=0.41%
00:25:14.207    cpu          : usr=0.53%, sys=5.88%, ctx=8212, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=33402,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
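As a sanity check, job0's figures are internally consistent: at the 256 KiB block size, 3334 IOPS x 0.25 MiB/IO = 833.5 MiB/s, matching the reported BW=834MiB/s, and io=8351 MiB moved in 10017 ms gives the same ~834 MiB/s.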
00:25:14.207  job1: (groupid=0, jobs=1): err= 0: pid=3392876: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=867, BW=217MiB/s (227MB/s)(2181MiB/10059msec)
00:25:14.207      slat (usec): min=12, max=27478, avg=1113.17, stdev=3151.64
00:25:14.207      clat (msec): min=13, max=136, avg=72.59, stdev=19.56
00:25:14.207       lat (msec): min=13, max=136, avg=73.71, stdev=20.06
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   27],  5.00th=[   35], 10.00th=[   36], 20.00th=[   70],
00:25:14.207       | 30.00th=[   71], 40.00th=[   72], 50.00th=[   73], 60.00th=[   74],
00:25:14.207       | 70.00th=[   77], 80.00th=[   86], 90.00th=[   95], 95.00th=[  110],
00:25:14.207       | 99.00th=[  114], 99.50th=[  116], 99.90th=[  131], 99.95th=[  133],
00:25:14.207       | 99.99th=[  138]
00:25:14.207     bw (  KiB/s): min=148992, max=468480, per=6.26%, avg=221747.20, stdev=66972.24, samples=20
00:25:14.207     iops        : min=  582, max= 1830, avg=866.20, stdev=261.61, samples=20
00:25:14.207    lat (msec)   : 20=0.34%, 50=13.66%, 100=78.36%, 250=7.63%
00:25:14.207    cpu          : usr=0.40%, sys=4.16%, ctx=1844, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=8725,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job2: (groupid=0, jobs=1): err= 0: pid=3392877: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=907, BW=227MiB/s (238MB/s)(2283MiB/10060msec)
00:25:14.207      slat (usec): min=11, max=65309, avg=1057.49, stdev=4278.85
00:25:14.207      clat (msec): min=13, max=173, avg=69.37, stdev=24.66
00:25:14.207       lat (msec): min=14, max=175, avg=70.42, stdev=25.34
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   17],  5.00th=[   19], 10.00th=[   26], 20.00th=[   63],
00:25:14.207       | 30.00th=[   72], 40.00th=[   73], 50.00th=[   74], 60.00th=[   75],
00:25:14.207       | 70.00th=[   77], 80.00th=[   86], 90.00th=[   94], 95.00th=[  110],
00:25:14.207       | 99.00th=[  115], 99.50th=[  116], 99.90th=[  136], 99.95th=[  140],
00:25:14.207       | 99.99th=[  174]
00:25:14.207     bw (  KiB/s): min=134656, max=624640, per=6.56%, avg=232209.60, stdev=103344.52, samples=20
00:25:14.207     iops        : min=  526, max= 2440, avg=907.05, stdev=403.70, samples=20
00:25:14.207    lat (msec)   : 20=7.43%, 50=12.32%, 100=72.42%, 250=7.83%
00:25:14.207    cpu          : usr=0.40%, sys=4.01%, ctx=1935, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=9133,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job3: (groupid=0, jobs=1): err= 0: pid=3392878: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=791, BW=198MiB/s (208MB/s)(1991MiB/10058msec)
00:25:14.207      slat (usec): min=18, max=26311, avg=1253.16, stdev=3276.01
00:25:14.207      clat (msec): min=16, max=134, avg=79.50, stdev=12.24
00:25:14.207       lat (msec): min=16, max=134, avg=80.75, stdev=12.74
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   68],  5.00th=[   71], 10.00th=[   72], 20.00th=[   73],
00:25:14.207       | 30.00th=[   73], 40.00th=[   74], 50.00th=[   74], 60.00th=[   75],
00:25:14.207       | 70.00th=[   79], 80.00th=[   89], 90.00th=[   97], 95.00th=[  111],
00:25:14.207       | 99.00th=[  115], 99.50th=[  122], 99.90th=[  129], 99.95th=[  131],
00:25:14.207       | 99.99th=[  136]
00:25:14.207     bw (  KiB/s): min=148480, max=223232, per=5.71%, avg=202209.95, stdev=24657.37, samples=20
00:25:14.207     iops        : min=  580, max=  872, avg=789.85, stdev=96.30, samples=20
00:25:14.207    lat (msec)   : 20=0.16%, 50=0.31%, 100=91.02%, 250=8.50%
00:25:14.207    cpu          : usr=0.46%, sys=3.81%, ctx=1490, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=7962,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job4: (groupid=0, jobs=1): err= 0: pid=3392882: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=937, BW=234MiB/s (246MB/s)(2357MiB/10058msec)
00:25:14.207      slat (usec): min=16, max=24186, avg=1056.47, stdev=2614.76
00:25:14.207      clat (msec): min=16, max=127, avg=67.15, stdev=11.90
00:25:14.207       lat (msec): min=17, max=139, avg=68.20, stdev=12.27
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   51],  5.00th=[   53], 10.00th=[   53], 20.00th=[   54],
00:25:14.207       | 30.00th=[   56], 40.00th=[   69], 50.00th=[   71], 60.00th=[   72],
00:25:14.207       | 70.00th=[   73], 80.00th=[   74], 90.00th=[   80], 95.00th=[   89],
00:25:14.207       | 99.00th=[   96], 99.50th=[  102], 99.90th=[  122], 99.95th=[  127],
00:25:14.207       | 99.99th=[  129]
00:25:14.207     bw (  KiB/s): min=170837, max=310272, per=6.77%, avg=239709.85, stdev=39669.25, samples=20
00:25:14.207     iops        : min=  667, max= 1212, avg=936.35, stdev=154.99, samples=20
00:25:14.207    lat (msec)   : 20=0.11%, 50=1.05%, 100=98.28%, 250=0.56%
00:25:14.207    cpu          : usr=0.41%, sys=4.55%, ctx=1799, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=9426,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job5: (groupid=0, jobs=1): err= 0: pid=3392883: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=937, BW=234MiB/s (246MB/s)(2359MiB/10061msec)
00:25:14.207      slat (usec): min=15, max=25712, avg=1055.16, stdev=2762.34
00:25:14.207      clat (msec): min=13, max=130, avg=67.10, stdev=12.09
00:25:14.207       lat (msec): min=14, max=130, avg=68.16, stdev=12.48
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   50],  5.00th=[   53], 10.00th=[   53], 20.00th=[   55],
00:25:14.207       | 30.00th=[   56], 40.00th=[   69], 50.00th=[   71], 60.00th=[   72],
00:25:14.207       | 70.00th=[   73], 80.00th=[   74], 90.00th=[   81], 95.00th=[   89],
00:25:14.207       | 99.00th=[   97], 99.50th=[  102], 99.90th=[  127], 99.95th=[  127],
00:25:14.207       | 99.99th=[  131]
00:25:14.207     bw (  KiB/s): min=175967, max=302592, per=6.78%, avg=239966.35, stdev=38982.27, samples=20
00:25:14.207     iops        : min=  687, max= 1182, avg=937.35, stdev=152.31, samples=20
00:25:14.207    lat (msec)   : 20=0.19%, 50=1.03%, 100=98.27%, 250=0.51%
00:25:14.207    cpu          : usr=0.40%, sys=4.61%, ctx=1792, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=9436,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job6: (groupid=0, jobs=1): err= 0: pid=3392887: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=791, BW=198MiB/s (208MB/s)(1991MiB/10060msec)
00:25:14.207      slat (usec): min=13, max=37953, avg=1252.56, stdev=4046.15
00:25:14.207      clat (msec): min=12, max=138, avg=79.52, stdev=12.75
00:25:14.207       lat (msec): min=12, max=139, avg=80.77, stdev=13.45
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   66],  5.00th=[   71], 10.00th=[   72], 20.00th=[   73],
00:25:14.207       | 30.00th=[   73], 40.00th=[   74], 50.00th=[   74], 60.00th=[   75],
00:25:14.207       | 70.00th=[   79], 80.00th=[   89], 90.00th=[   99], 95.00th=[  110],
00:25:14.207       | 99.00th=[  115], 99.50th=[  118], 99.90th=[  133], 99.95th=[  136],
00:25:14.207       | 99.99th=[  138]
00:25:14.207     bw (  KiB/s): min=142336, max=224256, per=5.71%, avg=202236.75, stdev=25399.63, samples=20
00:25:14.207     iops        : min=  556, max=  876, avg=789.95, stdev=99.18, samples=20
00:25:14.207    lat (msec)   : 20=0.34%, 50=0.38%, 100=89.84%, 250=9.44%
00:25:14.207    cpu          : usr=0.45%, sys=3.85%, ctx=1507, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=7963,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job7: (groupid=0, jobs=1): err= 0: pid=3392888: Sat Dec 14 13:51:11 2024
00:25:14.207    read: IOPS=793, BW=198MiB/s (208MB/s)(1996MiB/10061msec)
00:25:14.207      slat (usec): min=18, max=45645, avg=1247.53, stdev=3658.48
00:25:14.207      clat (msec): min=8, max=155, avg=79.33, stdev=13.45
00:25:14.207       lat (msec): min=8, max=158, avg=80.58, stdev=14.02
00:25:14.207      clat percentiles (msec):
00:25:14.207       |  1.00th=[   45],  5.00th=[   71], 10.00th=[   72], 20.00th=[   73],
00:25:14.207       | 30.00th=[   73], 40.00th=[   74], 50.00th=[   74], 60.00th=[   75],
00:25:14.207       | 70.00th=[   79], 80.00th=[   89], 90.00th=[   97], 95.00th=[  111],
00:25:14.207       | 99.00th=[  117], 99.50th=[  126], 99.90th=[  155], 99.95th=[  155],
00:25:14.207       | 99.99th=[  157]
00:25:14.207     bw (  KiB/s): min=145920, max=231424, per=5.73%, avg=202803.20, stdev=25394.86, samples=20
00:25:14.207     iops        : min=  570, max=  904, avg=792.20, stdev=99.20, samples=20
00:25:14.207    lat (msec)   : 10=0.14%, 20=0.36%, 50=0.53%, 100=90.43%, 250=8.54%
00:25:14.207    cpu          : usr=0.38%, sys=3.88%, ctx=1496, majf=0, minf=4097
00:25:14.207    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:14.207       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.207       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.207       issued rwts: total=7985,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.207       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.207  job8: (groupid=0, jobs=1): err= 0: pid=3392889: Sat Dec 14 13:51:11 2024
00:25:14.208    read: IOPS=938, BW=235MiB/s (246MB/s)(2360MiB/10060msec)
00:25:14.208      slat (usec): min=16, max=22165, avg=1056.25, stdev=2745.84
00:25:14.208      clat (msec): min=13, max=117, avg=67.07, stdev=11.95
00:25:14.208       lat (msec): min=13, max=133, avg=68.13, stdev=12.34
00:25:14.208      clat percentiles (msec):
00:25:14.208       |  1.00th=[   51],  5.00th=[   53], 10.00th=[   53], 20.00th=[   54],
00:25:14.208       | 30.00th=[   56], 40.00th=[   69], 50.00th=[   71], 60.00th=[   72],
00:25:14.208       | 70.00th=[   73], 80.00th=[   74], 90.00th=[   81], 95.00th=[   89],
00:25:14.208       | 99.00th=[   96], 99.50th=[  100], 99.90th=[  113], 99.95th=[  118],
00:25:14.208       | 99.99th=[  118]
00:25:14.208     bw (  KiB/s): min=175967, max=303616, per=6.78%, avg=240043.15, stdev=39127.90, samples=20
00:25:14.208     iops        : min=  687, max= 1186, avg=937.65, stdev=152.88, samples=20
00:25:14.208    lat (msec)   : 20=0.21%, 50=0.89%, 100=98.54%, 250=0.36%
00:25:14.208    cpu          : usr=0.52%, sys=4.55%, ctx=1775, majf=0, minf=4097
00:25:14.208    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:25:14.208       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.208       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.208       issued rwts: total=9440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.208       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.208  job9: (groupid=0, jobs=1): err= 0: pid=3392894: Sat Dec 14 13:51:11 2024
00:25:14.208    read: IOPS=1425, BW=356MiB/s (374MB/s)(3576MiB/10033msec)
00:25:14.208      slat (usec): min=13, max=56803, avg=686.00, stdev=2283.80
00:25:14.208      clat (msec): min=3, max=159, avg=44.17, stdev=21.10
00:25:14.208       lat (msec): min=3, max=167, avg=44.85, stdev=21.50
00:25:14.208      clat percentiles (msec):
00:25:14.208       |  1.00th=[   33],  5.00th=[   34], 10.00th=[   34], 20.00th=[   35],
00:25:14.208       | 30.00th=[   36], 40.00th=[   36], 50.00th=[   36], 60.00th=[   37],
00:25:14.208       | 70.00th=[   38], 80.00th=[   40], 90.00th=[   90], 95.00th=[   97],
00:25:14.208       | 99.00th=[  113], 99.50th=[  114], 99.90th=[  128], 99.95th=[  131],
00:25:14.208       | 99.99th=[  132]
00:25:14.208     bw (  KiB/s): min=136192, max=459776, per=10.29%, avg=364569.60, stdev=126767.36, samples=20
00:25:14.208     iops        : min=  532, max= 1796, avg=1424.10, stdev=495.19, samples=20
00:25:14.208    lat (msec)   : 4=0.03%, 10=0.15%, 20=0.28%, 50=84.77%, 100=10.53%
00:25:14.208    lat (msec)   : 250=4.24%
00:25:14.208    cpu          : usr=0.68%, sys=6.29%, ctx=2645, majf=0, minf=3659
00:25:14.208    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:25:14.208       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.208       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.208       issued rwts: total=14304,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.208       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.208  job10: (groupid=0, jobs=1): err= 0: pid=3392895: Sat Dec 14 13:51:11 2024
00:25:14.208    read: IOPS=2134, BW=534MiB/s (560MB/s)(5354MiB/10031msec)
00:25:14.208      slat (usec): min=11, max=21593, avg=463.93, stdev=1190.31
00:25:14.208      clat (usec): min=10066, max=66490, avg=29482.37, stdev=9365.33
00:25:14.208       lat (usec): min=10513, max=66542, avg=29946.30, stdev=9535.28
00:25:14.208      clat percentiles (usec):
00:25:14.208       |  1.00th=[15533],  5.00th=[16450], 10.00th=[17433], 20.00th=[17957],
00:25:14.208       | 30.00th=[18744], 40.00th=[32900], 50.00th=[34341], 60.00th=[35390],
00:25:14.208       | 70.00th=[35914], 80.00th=[36439], 90.00th=[37487], 95.00th=[39584],
00:25:14.208       | 99.00th=[51643], 99.50th=[53216], 99.90th=[58459], 99.95th=[61080],
00:25:14.208       | 99.99th=[66323]
00:25:14.208     bw (  KiB/s): min=383488, max=912384, per=15.43%, avg=546611.20, stdev=168163.21, samples=20
00:25:14.208     iops        : min= 1498, max= 3564, avg=2135.20, stdev=656.89, samples=20
00:25:14.208    lat (msec)   : 20=34.22%, 50=64.40%, 100=1.38%
00:25:14.208    cpu          : usr=0.70%, sys=7.35%, ctx=3786, majf=0, minf=4097
00:25:14.208    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:25:14.208       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:14.208       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:14.208       issued rwts: total=21415,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:14.208       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:14.208  
00:25:14.208  Run status group 0 (all jobs):
00:25:14.208     READ: bw=3459MiB/s (3627MB/s), 198MiB/s-834MiB/s (208MB/s-874MB/s), io=34.0GiB (36.5GB), run=10017-10061msec
00:25:14.208  
00:25:14.208  Disk stats (read/write):
00:25:14.208    nvme0n1: ios=65869/0, merge=0/0, ticks=1215717/0, in_queue=1215717, util=96.75%
00:25:14.208    nvme10n1: ios=17113/0, merge=0/0, ticks=1222021/0, in_queue=1222021, util=96.98%
00:25:14.208    nvme1n1: ios=17935/0, merge=0/0, ticks=1221901/0, in_queue=1221901, util=97.32%
00:25:14.208    nvme2n1: ios=15592/0, merge=0/0, ticks=1221072/0, in_queue=1221072, util=97.49%
00:25:14.208    nvme3n1: ios=18542/0, merge=0/0, ticks=1220937/0, in_queue=1220937, util=97.59%
00:25:14.208    nvme4n1: ios=18563/0, merge=0/0, ticks=1221600/0, in_queue=1221600, util=98.03%
00:25:14.208    nvme5n1: ios=15625/0, merge=0/0, ticks=1221942/0, in_queue=1221942, util=98.22%
00:25:14.208    nvme6n1: ios=15676/0, merge=0/0, ticks=1222381/0, in_queue=1222381, util=98.39%
00:25:14.208    nvme7n1: ios=18544/0, merge=0/0, ticks=1219857/0, in_queue=1219857, util=98.86%
00:25:14.208    nvme8n1: ios=28093/0, merge=0/0, ticks=1223076/0, in_queue=1223076, util=99.12%
00:25:14.208    nvme9n1: ios=42301/0, merge=0/0, ticks=1220242/0, in_queue=1220242, util=99.25%
00:25:14.208   13:51:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:14.208  [global]
00:25:14.208  thread=1
00:25:14.208  invalidate=1
00:25:14.208  rw=randwrite
00:25:14.208  time_based=1
00:25:14.208  runtime=10
00:25:14.208  ioengine=libaio
00:25:14.208  direct=1
00:25:14.208  bs=262144
00:25:14.208  iodepth=64
00:25:14.208  norandommap=1
00:25:14.208  numjobs=1
00:25:14.208  
00:25:14.208  [job0]
00:25:14.208  filename=/dev/nvme0n1
00:25:14.208  [job1]
00:25:14.208  filename=/dev/nvme10n1
00:25:14.208  [job2]
00:25:14.208  filename=/dev/nvme1n1
00:25:14.208  [job3]
00:25:14.208  filename=/dev/nvme2n1
00:25:14.208  [job4]
00:25:14.208  filename=/dev/nvme3n1
00:25:14.208  [job5]
00:25:14.208  filename=/dev/nvme4n1
00:25:14.208  [job6]
00:25:14.208  filename=/dev/nvme5n1
00:25:14.208  [job7]
00:25:14.208  filename=/dev/nvme6n1
00:25:14.208  [job8]
00:25:14.208  filename=/dev/nvme7n1
00:25:14.208  [job9]
00:25:14.208  filename=/dev/nvme8n1
00:25:14.208  [job10]
00:25:14.208  filename=/dev/nvme9n1
00:25:14.208  Could not set queue depth (nvme0n1)
00:25:14.208  Could not set queue depth (nvme10n1)
00:25:14.208  Could not set queue depth (nvme1n1)
00:25:14.208  Could not set queue depth (nvme2n1)
00:25:14.208  Could not set queue depth (nvme3n1)
00:25:14.208  Could not set queue depth (nvme4n1)
00:25:14.208  Could not set queue depth (nvme5n1)
00:25:14.208  Could not set queue depth (nvme6n1)
00:25:14.208  Could not set queue depth (nvme7n1)
00:25:14.208  Could not set queue depth (nvme8n1)
00:25:14.208  Could not set queue depth (nvme9n1)
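The fio-wrapper invocation above maps directly onto the job file it just printed: -t randwrite becomes rw=randwrite, -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, and -r 10 becomes runtime=10, with one [jobN] section per connected NVMe-oF namespace (job order follows lexicographic glob order, which is why nvme10n1 lands on job1). The "Could not set queue depth" lines appear to be fio noting that it could not adjust the block devices' queue depth; the jobs proceed regardless, as the "Starting 11 threads" line below shows. A minimal sketch of how such a job file could be regenerated, assuming the flag mapping inferred above (the real fio-wrapper script may differ):

    # Hypothetical reconstruction of the job file; the flag-to-option mapping
    # is inferred from the [global] section printed in the log above.
    rw=randwrite bs=262144 iodepth=64 runtime=10
    {
      printf '[global]\nthread=1\ninvalidate=1\nrw=%s\ntime_based=1\nruntime=%s\n' "$rw" "$runtime"
      printf 'ioengine=libaio\ndirect=1\nbs=%s\niodepth=%s\nnorandommap=1\nnumjobs=1\n' "$bs" "$iodepth"
      i=0
      for dev in /dev/nvme*n1; do          # glob sorts lexicographically: nvme0n1, nvme10n1, nvme1n1, ...
        printf '\n[job%d]\nfilename=%s\n' "$i" "$dev"
        i=$((i + 1))
      done
    } > multiconnection.fio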
00:25:14.208  job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:14.208  fio-3.35
00:25:14.208  Starting 11 threads
00:25:24.193  
00:25:24.193  job0: (groupid=0, jobs=1): err= 0: pid=3395054: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=1033, BW=258MiB/s (271MB/s)(2595MiB/10049msec); 0 zone resets
00:25:24.193      slat (usec): min=23, max=17387, avg=937.75, stdev=1727.15
00:25:24.193      clat (msec): min=3, max=102, avg=60.99, stdev= 5.17
00:25:24.193       lat (msec): min=3, max=102, avg=61.93, stdev= 5.28
00:25:24.193      clat percentiles (msec):
00:25:24.193       |  1.00th=[   44],  5.00th=[   57], 10.00th=[   58], 20.00th=[   59],
00:25:24.193       | 30.00th=[   60], 40.00th=[   61], 50.00th=[   62], 60.00th=[   62],
00:25:24.193       | 70.00th=[   63], 80.00th=[   64], 90.00th=[   65], 95.00th=[   66],
00:25:24.193       | 99.00th=[   80], 99.50th=[   88], 99.90th=[  100], 99.95th=[  102],
00:25:24.193       | 99.99th=[  103]
00:25:24.193     bw (  KiB/s): min=238592, max=271872, per=8.35%, avg=264140.80, stdev=7730.85, samples=20
00:25:24.193     iops        : min=  932, max= 1062, avg=1031.80, stdev=30.20, samples=20
00:25:24.193    lat (msec)   : 4=0.03%, 10=0.08%, 20=0.08%, 50=1.49%, 100=98.24%
00:25:24.193    lat (msec)   : 250=0.09%
00:25:24.193    cpu          : usr=2.41%, sys=4.47%, ctx=2607, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,10381,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job1: (groupid=0, jobs=1): err= 0: pid=3395067: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=795, BW=199MiB/s (209MB/s)(1999MiB/10048msec); 0 zone resets
00:25:24.193      slat (usec): min=26, max=31184, avg=1235.40, stdev=2814.57
00:25:24.193      clat (msec): min=10, max=128, avg=79.17, stdev=23.18
00:25:24.193       lat (msec): min=12, max=131, avg=80.40, stdev=23.64
00:25:24.193      clat percentiles (msec):
00:25:24.193       |  1.00th=[   24],  5.00th=[   40], 10.00th=[   41], 20.00th=[   44],
00:25:24.193       | 30.00th=[   79], 40.00th=[   80], 50.00th=[   82], 60.00th=[   88],
00:25:24.193       | 70.00th=[   99], 80.00th=[  100], 90.00th=[  103], 95.00th=[  105],
00:25:24.193       | 99.00th=[  112], 99.50th=[  117], 99.90th=[  125], 99.95th=[  126],
00:25:24.193       | 99.99th=[  129]
00:25:24.193     bw (  KiB/s): min=158208, max=397312, per=6.42%, avg=203037.75, stdev=66318.48, samples=20
00:25:24.193     iops        : min=  618, max= 1552, avg=793.10, stdev=259.05, samples=20
00:25:24.193    lat (msec)   : 20=0.46%, 50=20.33%, 100=61.90%, 250=17.31%
00:25:24.193    cpu          : usr=1.75%, sys=3.42%, ctx=1972, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,7995,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job2: (groupid=0, jobs=1): err= 0: pid=3395068: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=1260, BW=315MiB/s (330MB/s)(3162MiB/10032msec); 0 zone resets
00:25:24.193      slat (usec): min=25, max=15567, avg=786.18, stdev=1425.22
00:25:24.193      clat (usec): min=20207, max=90139, avg=49960.36, stdev=10700.38
00:25:24.193       lat (usec): min=20266, max=90210, avg=50746.53, stdev=10830.25
00:25:24.193      clat percentiles (usec):
00:25:24.193       |  1.00th=[37487],  5.00th=[39060], 10.00th=[39584], 20.00th=[40633],
00:25:24.193       | 30.00th=[41681], 40.00th=[42206], 50.00th=[42730], 60.00th=[56886],
00:25:24.193       | 70.00th=[60556], 80.00th=[62653], 90.00th=[63701], 95.00th=[65274],
00:25:24.193       | 99.00th=[68682], 99.50th=[77071], 99.90th=[82314], 99.95th=[82314],
00:25:24.193       | 99.99th=[89654]
00:25:24.193     bw (  KiB/s): min=233939, max=396288, per=10.18%, avg=322173.75, stdev=65305.70, samples=20
00:25:24.193     iops        : min=  913, max= 1548, avg=1258.45, stdev=255.16, samples=20
00:25:24.193    lat (msec)   : 50=59.20%, 100=40.80%
00:25:24.193    cpu          : usr=2.97%, sys=4.94%, ctx=3100, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,12647,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job3: (groupid=0, jobs=1): err= 0: pid=3395069: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=1296, BW=324MiB/s (340MB/s)(3253MiB/10033msec); 0 zone resets
00:25:24.193      slat (usec): min=26, max=8657, avg=764.24, stdev=1421.79
00:25:24.193      clat (usec): min=7635, max=71262, avg=48573.98, stdev=9511.98
00:25:24.193       lat (usec): min=7687, max=71307, avg=49338.22, stdev=9634.11
00:25:24.193      clat percentiles (usec):
00:25:24.193       |  1.00th=[37487],  5.00th=[38536], 10.00th=[39584], 20.00th=[40633],
00:25:24.193       | 30.00th=[41157], 40.00th=[41681], 50.00th=[42730], 60.00th=[50594],
00:25:24.193       | 70.00th=[57934], 80.00th=[59507], 90.00th=[61604], 95.00th=[62653],
00:25:24.193       | 99.00th=[64750], 99.50th=[65799], 99.90th=[67634], 99.95th=[68682],
00:25:24.193       | 99.99th=[70779]
00:25:24.193     bw (  KiB/s): min=263680, max=395264, per=10.48%, avg=331481.60, stdev=59702.67, samples=20
00:25:24.193     iops        : min= 1030, max= 1544, avg=1294.85, stdev=233.21, samples=20
00:25:24.193    lat (msec)   : 10=0.03%, 20=0.10%, 50=59.78%, 100=40.08%
00:25:24.193    cpu          : usr=3.01%, sys=4.87%, ctx=3211, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,13010,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job4: (groupid=0, jobs=1): err= 0: pid=3395070: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=705, BW=176MiB/s (185MB/s)(1776MiB/10064msec); 0 zone resets
00:25:24.193      slat (usec): min=28, max=24849, avg=1392.75, stdev=3023.04
00:25:24.193      clat (msec): min=22, max=139, avg=89.25, stdev=11.28
00:25:24.193       lat (msec): min=22, max=140, avg=90.64, stdev=11.65
00:25:24.193      clat percentiles (msec):
00:25:24.193       |  1.00th=[   70],  5.00th=[   78], 10.00th=[   79], 20.00th=[   80],
00:25:24.193       | 30.00th=[   81], 40.00th=[   83], 50.00th=[   86], 60.00th=[   97],
00:25:24.193       | 70.00th=[   99], 80.00th=[  101], 90.00th=[  103], 95.00th=[  106],
00:25:24.193       | 99.00th=[  114], 99.50th=[  118], 99.90th=[  132], 99.95th=[  140],
00:25:24.193       | 99.99th=[  140]
00:25:24.193     bw (  KiB/s): min=157696, max=202240, per=5.70%, avg=180204.40, stdev=20085.84, samples=20
00:25:24.193     iops        : min=  616, max=  790, avg=703.90, stdev=78.44, samples=20
00:25:24.193    lat (msec)   : 50=0.31%, 100=79.94%, 250=19.75%
00:25:24.193    cpu          : usr=1.61%, sys=3.22%, ctx=1756, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,7103,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job5: (groupid=0, jobs=1): err= 0: pid=3395072: Sat Dec 14 13:51:22 2024
00:25:24.193    write: IOPS=2307, BW=577MiB/s (605MB/s)(5796MiB/10048msec); 0 zone resets
00:25:24.193      slat (usec): min=18, max=6742, avg=429.15, stdev=847.68
00:25:24.193      clat (msec): min=4, max=101, avg=27.30, stdev=10.05
00:25:24.193       lat (msec): min=5, max=101, avg=27.73, stdev=10.19
00:25:24.193      clat percentiles (usec):
00:25:24.193       |  1.00th=[18744],  5.00th=[19006], 10.00th=[19268], 20.00th=[19792],
00:25:24.193       | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365],
00:25:24.193       | 70.00th=[38536], 80.00th=[40633], 90.00th=[41681], 95.00th=[42206],
00:25:24.193       | 99.00th=[43779], 99.50th=[45876], 99.90th=[81265], 99.95th=[92799],
00:25:24.193       | 99.99th=[98042]
00:25:24.193     bw (  KiB/s): min=374272, max=800768, per=18.71%, avg=591911.00, stdev=202638.83, samples=20
00:25:24.193     iops        : min= 1462, max= 3128, avg=2312.15, stdev=791.56, samples=20
00:25:24.193    lat (msec)   : 10=0.02%, 20=23.66%, 50=75.93%, 100=0.38%, 250=0.01%
00:25:24.193    cpu          : usr=3.57%, sys=5.86%, ctx=4897, majf=0, minf=1
00:25:24.193    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:25:24.193       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.193       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.193       issued rwts: total=0,23183,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.193       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.193  job6: (groupid=0, jobs=1): err= 0: pid=3395076: Sat Dec 14 13:51:22 2024
00:25:24.194    write: IOPS=1006, BW=252MiB/s (264MB/s)(2534MiB/10065msec); 0 zone resets
00:25:24.194      slat (usec): min=27, max=45080, avg=968.77, stdev=2647.75
00:25:24.194      clat (msec): min=4, max=146, avg=62.56, stdev=27.96
00:25:24.194       lat (msec): min=4, max=146, avg=63.53, stdev=28.44
00:25:24.194      clat percentiles (msec):
00:25:24.194       |  1.00th=[   37],  5.00th=[   39], 10.00th=[   40], 20.00th=[   41],
00:25:24.194       | 30.00th=[   41], 40.00th=[   42], 50.00th=[   43], 60.00th=[   44],
00:25:24.194       | 70.00th=[   96], 80.00th=[  100], 90.00th=[  102], 95.00th=[  104],
00:25:24.194       | 99.00th=[  109], 99.50th=[  123], 99.90th=[  134], 99.95th=[  138],
00:25:24.194       | 99.99th=[  140]
00:25:24.194     bw (  KiB/s): min=154624, max=401408, per=8.15%, avg=257882.15, stdev=114330.10, samples=20
00:25:24.194     iops        : min=  604, max= 1568, avg=1007.35, stdev=446.60, samples=20
00:25:24.194    lat (msec)   : 10=0.01%, 20=0.18%, 50=60.48%, 100=26.13%, 250=13.20%
00:25:24.194    cpu          : usr=2.56%, sys=3.60%, ctx=2489, majf=0, minf=1
00:25:24.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:25:24.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.194       issued rwts: total=0,10135,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.194       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.194  job7: (groupid=0, jobs=1): err= 0: pid=3395077: Sat Dec 14 13:51:22 2024
00:25:24.194    write: IOPS=1260, BW=315MiB/s (331MB/s)(3162MiB/10032msec); 0 zone resets
00:25:24.194      slat (usec): min=26, max=15387, avg=785.79, stdev=1413.27
00:25:24.194      clat (usec): min=20033, max=91532, avg=49952.99, stdev=10673.69
00:25:24.194       lat (usec): min=20096, max=91601, avg=50738.79, stdev=10793.92
00:25:24.194      clat percentiles (usec):
00:25:24.194       |  1.00th=[37487],  5.00th=[39060], 10.00th=[40109], 20.00th=[40633],
00:25:24.194       | 30.00th=[41681], 40.00th=[42206], 50.00th=[42730], 60.00th=[56886],
00:25:24.194       | 70.00th=[60556], 80.00th=[62653], 90.00th=[63701], 95.00th=[64750],
00:25:24.194       | 99.00th=[68682], 99.50th=[77071], 99.90th=[83362], 99.95th=[84411],
00:25:24.194       | 99.99th=[91751]
00:25:24.194     bw (  KiB/s): min=231887, max=395776, per=10.19%, avg=322224.75, stdev=65487.17, samples=20
00:25:24.194     iops        : min=  905, max= 1546, avg=1258.65, stdev=255.87, samples=20
00:25:24.194    lat (msec)   : 50=59.17%, 100=40.83%
00:25:24.194    cpu          : usr=3.02%, sys=5.01%, ctx=3143, majf=0, minf=1
00:25:24.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:25:24.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.194       issued rwts: total=0,12649,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.194       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.194  job8: (groupid=0, jobs=1): err= 0: pid=3395078: Sat Dec 14 13:51:22 2024
00:25:24.194    write: IOPS=1296, BW=324MiB/s (340MB/s)(3252MiB/10033msec); 0 zone resets
00:25:24.194      slat (usec): min=25, max=11275, avg=764.20, stdev=1420.84
00:25:24.194      clat (usec): min=7621, max=71281, avg=48585.84, stdev=9536.39
00:25:24.194       lat (usec): min=7665, max=71329, avg=49350.04, stdev=9665.92
00:25:24.194      clat percentiles (usec):
00:25:24.194       |  1.00th=[36963],  5.00th=[38536], 10.00th=[39584], 20.00th=[40633],
00:25:24.194       | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[50070],
00:25:24.194       | 70.00th=[58459], 80.00th=[60031], 90.00th=[61604], 95.00th=[62653],
00:25:24.194       | 99.00th=[64750], 99.50th=[65799], 99.90th=[67634], 99.95th=[69731],
00:25:24.194       | 99.99th=[70779]
00:25:24.194     bw (  KiB/s): min=264704, max=397312, per=10.48%, avg=331405.05, stdev=59981.44, samples=20
00:25:24.194     iops        : min= 1034, max= 1552, avg=1294.55, stdev=234.30, samples=20
00:25:24.194    lat (msec)   : 10=0.05%, 20=0.08%, 50=59.92%, 100=39.95%
00:25:24.194    cpu          : usr=3.04%, sys=4.90%, ctx=3223, majf=0, minf=1
00:25:24.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:25:24.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.194       issued rwts: total=0,13007,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.194       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.194  job9: (groupid=0, jobs=1): err= 0: pid=3395079: Sat Dec 14 13:51:22 2024
00:25:24.194    write: IOPS=708, BW=177MiB/s (186MB/s)(1782MiB/10064msec); 0 zone resets
00:25:24.194      slat (usec): min=28, max=27019, avg=1397.88, stdev=3108.34
00:25:24.194      clat (msec): min=13, max=141, avg=88.91, stdev=12.12
00:25:24.194       lat (msec): min=13, max=141, avg=90.31, stdev=12.49
00:25:24.194      clat percentiles (msec):
00:25:24.194       |  1.00th=[   61],  5.00th=[   77], 10.00th=[   78], 20.00th=[   80],
00:25:24.194       | 30.00th=[   81], 40.00th=[   83], 50.00th=[   85], 60.00th=[   97],
00:25:24.194       | 70.00th=[  100], 80.00th=[  101], 90.00th=[  103], 95.00th=[  106],
00:25:24.194       | 99.00th=[  115], 99.50th=[  123], 99.90th=[  132], 99.95th=[  136],
00:25:24.194       | 99.99th=[  142]
00:25:24.194     bw (  KiB/s): min=157184, max=211968, per=5.72%, avg=180889.60, stdev=21041.63, samples=20
00:25:24.194     iops        : min=  614, max=  828, avg=706.60, stdev=82.19, samples=20
00:25:24.194    lat (msec)   : 20=0.11%, 50=0.46%, 100=79.23%, 250=20.20%
00:25:24.194    cpu          : usr=1.62%, sys=3.13%, ctx=1742, majf=0, minf=1
00:25:24.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:24.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.194       issued rwts: total=0,7129,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.194       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.194  job10: (groupid=0, jobs=1): err= 0: pid=3395080: Sat Dec 14 13:51:22 2024
00:25:24.194    write: IOPS=709, BW=177MiB/s (186MB/s)(1784MiB/10062msec); 0 zone resets
00:25:24.194      slat (usec): min=25, max=26732, avg=1396.83, stdev=2948.84
00:25:24.194      clat (msec): min=4, max=134, avg=88.84, stdev=12.39
00:25:24.194       lat (msec): min=4, max=148, avg=90.24, stdev=12.73
00:25:24.194      clat percentiles (msec):
00:25:24.194       |  1.00th=[   61],  5.00th=[   77], 10.00th=[   79], 20.00th=[   80],
00:25:24.194       | 30.00th=[   81], 40.00th=[   83], 50.00th=[   85], 60.00th=[   97],
00:25:24.194       | 70.00th=[  100], 80.00th=[  101], 90.00th=[  103], 95.00th=[  106],
00:25:24.194       | 99.00th=[  112], 99.50th=[  117], 99.90th=[  133], 99.95th=[  136],
00:25:24.194       | 99.99th=[  136]
00:25:24.194     bw (  KiB/s): min=157696, max=216496, per=5.72%, avg=181039.20, stdev=20983.50, samples=20
00:25:24.194     iops        : min=  616, max=  845, avg=707.15, stdev=81.91, samples=20
00:25:24.194    lat (msec)   : 10=0.15%, 20=0.06%, 50=0.52%, 100=79.00%, 250=20.27%
00:25:24.194    cpu          : usr=1.97%, sys=2.86%, ctx=1730, majf=0, minf=1
00:25:24.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:24.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:24.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:24.194       issued rwts: total=0,7134,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:24.194       latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:24.194  
00:25:24.194  Run status group 0 (all jobs):
00:25:24.194    WRITE: bw=3089MiB/s (3239MB/s), 176MiB/s-577MiB/s (185MB/s-605MB/s), io=30.4GiB (32.6GB), run=10032-10065msec
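As a quick sanity check on the WRITE summary, total I/O over the wall-clock window should land near the aggregate bandwidth fio reports; the small gap comes from fio averaging over each job's own runtime (run=10032-10065msec):

    # 30.4 GiB moved in roughly 10.065 s ≈ the 3089 MiB/s aggregate above
    awk 'BEGIN { printf "%.0f MiB/s\n", (30.4 * 1024) / 10.065 }'   # → 3093 MiB/s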
00:25:24.194  
00:25:24.194  Disk stats (read/write):
00:25:24.194    nvme0n1: ios=49/20348, merge=0/0, ticks=16/1216901, in_queue=1216917, util=96.76%
00:25:24.194    nvme10n1: ios=0/15532, merge=0/0, ticks=0/1216933, in_queue=1216933, util=96.91%
00:25:24.194    nvme1n1: ios=0/24786, merge=0/0, ticks=0/1218847, in_queue=1218847, util=97.24%
00:25:24.194    nvme2n1: ios=0/25580, merge=0/0, ticks=0/1217485, in_queue=1217485, util=97.42%
00:25:24.194    nvme3n1: ios=0/13883, merge=0/0, ticks=0/1214457, in_queue=1214457, util=97.50%
00:25:24.194    nvme4n1: ios=0/45909, merge=0/0, ticks=0/1228399, in_queue=1228399, util=97.90%
00:25:24.194    nvme5n1: ios=0/19927, merge=0/0, ticks=0/1216956, in_queue=1216956, util=98.08%
00:25:24.194    nvme6n1: ios=0/24787, merge=0/0, ticks=0/1219032, in_queue=1219032, util=98.23%
00:25:24.194    nvme7n1: ios=0/25571, merge=0/0, ticks=0/1218837, in_queue=1218837, util=98.69%
00:25:24.194    nvme8n1: ios=0/13938, merge=0/0, ticks=0/1214378, in_queue=1214378, util=98.91%
00:25:24.194    nvme9n1: ios=0/13953, merge=0/0, ticks=0/1214469, in_queue=1214469, util=99.08%
00:25:24.194   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:25:24.194    13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:25:24.194   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:24.194   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:25:24.453  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:25:24.454   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:25:24.454   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:24.454   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:24.454   13:51:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
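The teardown that starts here repeats once per subsystem. Condensing the xtrace, the loop body at multiconnection.sh@37-40 amounts to the following; waitforserial_disconnect is reconstructed from the autotest_common.sh@1223-1235 lines, and its exact retry and timeout handling is not visible in this log:

    # Sketch of the wait helper: poll lsblk until the serial disappears.
    waitforserial_disconnect() {
        local i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
            sleep 1
            i=$((i + 1))            # the real helper also enforces a retry limit
        done
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"     # drop the initiator side
        waitforserial_disconnect "SPDK${i}"                    # wait for the namespace to vanish
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # tear down the target side
    done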
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:24.454   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:25:25.391  NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:25:25.391   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:25:25.391   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:25.391   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:25.391   13:51:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:25.391   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:25:26.326  NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:25:26.326   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:25:26.326   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:26.326   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:26.326   13:51:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:26.326   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:25:27.262  NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:25:27.262   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:25:27.262   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:27.262   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:27.262   13:51:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:27.521   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:25:28.458  NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:25:28.458   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:25:28.458   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:28.458   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:28.458   13:51:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:28.458   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:25:29.394  NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:25:29.394   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:25:29.394   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:29.394   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:29.394   13:51:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:29.394   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:25:30.332  NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:25:30.332   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:25:30.332   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:30.332   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:30.332   13:51:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:30.332   13:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:25:31.710  NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:31.710   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:31.711   13:51:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:25:32.279  NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:25:32.279   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:25:32.279   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:32.279   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:32.279   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:32.538   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:25:33.475  NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:25:33.475   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:25:33.475   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:33.475   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:33.475   13:51:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10
00:25:33.475   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:33.475   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:33.476   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:25:34.413  NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:25:34.413   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:25:34.413   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:25:34.413   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:25:34.413   13:51:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:25:34.414  rmmod nvme_rdma
00:25:34.414  rmmod nvme_fabrics
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3386283 ']'
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3386283
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3386283 ']'
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3386283
00:25:34.414    13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname
00:25:34.414   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:34.414    13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3386283
00:25:34.673   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:34.673   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:34.673   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3386283'
00:25:34.673  killing process with pid 3386283
00:25:34.673   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3386283
00:25:34.673   13:51:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3386283
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:25:38.911  
00:25:38.911  real	1m19.431s
00:25:38.911  user	5m8.466s
00:25:38.911  sys	0m19.168s
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:38.911  ************************************
00:25:38.911  END TEST nvmf_multiconnection
00:25:38.911  ************************************
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:38.911  ************************************
00:25:38.911  START TEST nvmf_initiator_timeout
00:25:38.911  ************************************
00:25:38.911   13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma
00:25:38.911  * Looking for test storage...
00:25:38.911  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:25:38.911    13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:38.911     13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version
00:25:38.911     13:51:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:38.911     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:38.911    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0
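The long xtrace block above is a version comparison: lt 1.15 2 splits both versions on '.', '-' and ':', walks the fields left to right, and returns as soon as one side wins; here 1 < 2 at the first field, so the comparison succeeds and the lcov coverage flags are exported just below. A condensed sketch, assuming the missing-field padding and decimal validation details the trace elides:

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing fields compare as 0
            (( a > b )) && [[ $op == '>' ]] && return 0
            (( a > b )) && return 1
            (( a < b )) && [[ $op == '<' ]] && return 0
            (( a < b )) && return 1
        done
        [[ $op == *'='* ]]                            # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }              # lt 1.15 2 → returns 0 (true)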
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:38.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:38.912  		--rc genhtml_branch_coverage=1
00:25:38.912  		--rc genhtml_function_coverage=1
00:25:38.912  		--rc genhtml_legend=1
00:25:38.912  		--rc geninfo_all_blocks=1
00:25:38.912  		--rc geninfo_unexecuted_blocks=1
00:25:38.912  		
00:25:38.912  		'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:38.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:38.912  		--rc genhtml_branch_coverage=1
00:25:38.912  		--rc genhtml_function_coverage=1
00:25:38.912  		--rc genhtml_legend=1
00:25:38.912  		--rc geninfo_all_blocks=1
00:25:38.912  		--rc geninfo_unexecuted_blocks=1
00:25:38.912  		
00:25:38.912  		'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:38.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:38.912  		--rc genhtml_branch_coverage=1
00:25:38.912  		--rc genhtml_function_coverage=1
00:25:38.912  		--rc genhtml_legend=1
00:25:38.912  		--rc geninfo_all_blocks=1
00:25:38.912  		--rc geninfo_unexecuted_blocks=1
00:25:38.912  		
00:25:38.912  		'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:38.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:38.912  		--rc genhtml_branch_coverage=1
00:25:38.912  		--rc genhtml_function_coverage=1
00:25:38.912  		--rc genhtml_legend=1
00:25:38.912  		--rc geninfo_all_blocks=1
00:25:38.912  		--rc geninfo_unexecuted_blocks=1
00:25:38.912  		
00:25:38.912  		'
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
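The variables above are the test-wide defaults from nvmf/common.sh: listener ports 4420-4422, the 192.168.100.0/24 RDMA addressing scheme starting at host address 8, a host NQN/ID pair generated by nvme gen-hostnqn, and the default subsystem NQN. They compose into the connect command issued later in this run; a sketch using the values exactly as traced:

    # values copied from the trace above; -i 15 is added later once an mlx5 NIC
    # is detected, and cnode1 is the subsystem this test creates further down
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
    target_ip=192.168.100.8    # $NVMF_IP_PREFIX.$NVMF_IP_LEAST_ADDR
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a "$target_ip" -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"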
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:38.912     13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:38.912      13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:38.912      13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:38.912      13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:38.912      13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH
00:25:38.912      13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:38.912  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0
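The "integer expression expected" message above is a real but harmless artifact: at common.sh line 33 the tested variable expands to an empty string, and test's -eq operator requires an integer on both sides. A defensive sketch of the same kind of check; SOME_FLAG is a hypothetical stand-in, not the SPDK variable:

    # default an empty/unset variable to 0 before a numeric test
    flag="${SOME_FLAG:-0}"
    if [ "$flag" -eq 1 ]; then
        echo "flag enabled"
    fi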
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:38.912    13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable
00:25:38.912   13:51:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=()
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:45.507   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:25:45.508  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:25:45.508  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:25:45.508  Found net devices under 0000:d9:00.0: mlx_0_0
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:25:45.508  Found net devices under 0000:d9:00.1: mlx_0_1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
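For each RDMA-capable PCI function kept (two Mellanox 0x15b3 ports with device ID 0x1015 on this host), the trace resolves the bound net interface through sysfs. A condensed sketch of that lookup, using the first PCI address reported above:

    # the sysfs glob may match nothing if no net driver is bound, hence the -e guard
    pci=0000:d9:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "net device under $pci: ${dev##*/}"    # -> mlx_0_0 here
    done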
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm
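The modprobe sequence above loads the kernel RDMA stack (connection managers, core verbs, and the userspace access modules) that both the SPDK target and the kernel initiator rely on. A quick verification sketch:

    # confirm the modules loaded above are resident
    lsmod | grep -E '^(ib_cm|ib_core|ib_umad|ib_uverbs|iw_cm|rdma_cm|rdma_ucm) '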
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:25:45.508  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:45.508      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:25:45.508      altname enp217s0f0np0
00:25:45.508      altname ens818f0np0
00:25:45.508      inet 192.168.100.8/24 scope global mlx_0_0
00:25:45.508         valid_lft forever preferred_lft forever
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:25:45.508  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:45.508      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:25:45.508      altname enp217s0f1np1
00:25:45.508      altname ens818f1np1
00:25:45.508      inet 192.168.100.9/24 scope global mlx_0_1
00:25:45.508         valid_lft forever preferred_lft forever
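allocate_nic_ips walks the RDMA interface list and reads back each interface's IPv4 address. The extraction pipeline traced above, condensed to one line per NIC with the interface names from this log:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9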
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:25:45.508      13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:25:45.508      13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1
00:25:45.508     13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:25:45.508  192.168.100.9'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:25:45.508  192.168.100.9'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:25:45.508  192.168.100.9'
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2
00:25:45.508    13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
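The first and second target addresses are then peeled off the newline-separated list with head and tail, exactly as traced:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9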
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3402336
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3402336
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3402336 ']'
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:45.508  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:45.508   13:51:44 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:45.508  [2024-12-14 13:51:44.882068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:25:45.508  [2024-12-14 13:51:44.882166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:45.508  [2024-12-14 13:51:45.014885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:45.508  [2024-12-14 13:51:45.113079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:45.508  [2024-12-14 13:51:45.113127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:45.508  [2024-12-14 13:51:45.113139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:45.508  [2024-12-14 13:51:45.113151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:45.508  [2024-12-14 13:51:45.113160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:45.508  [2024-12-14 13:51:45.115569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:45.508  [2024-12-14 13:51:45.115644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:45.508  [2024-12-14 13:51:45.115748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:45.508  [2024-12-14 13:51:45.115757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:46.074   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.075   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.333  Malloc0
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.333  Delay0
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.333   13:51:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.333  [2024-12-14 13:51:45.873818] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028fc0/0x7f2f3f948940) succeed.
00:25:46.333  [2024-12-14 13:51:45.883724] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029140/0x7f2f3f904940) succeed.
00:25:46.592   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.592   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:25:46.592   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.592   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.592   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:46.593  [2024-12-14 13:51:46.171679] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.593   13:51:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
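The rpc_cmd calls above assemble the target end to end: a 64 MiB malloc bdev, a Delay0 wrapper over it with 30 us average and p99 read/write latencies, an RDMA transport, subsystem cnode1 carrying the Delay0 namespace, and a listener on 192.168.100.8:4420, after which the initiator connects with a 15 s I/O timeout (-i 15). A condensed sketch of the same sequence via SPDK's rpc.py; the script path is illustrative, the arguments are as traced:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420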
00:25:47.530   13:51:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:25:47.530   13:51:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0
00:25:47.530   13:51:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:47.530   13:51:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:47.530   13:51:47 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2
00:25:49.436   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:49.436    13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:49.436    13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3403137
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:25:49.715   13:51:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:25:49.715  [global]
00:25:49.715  thread=1
00:25:49.715  invalidate=1
00:25:49.715  rw=write
00:25:49.715  time_based=1
00:25:49.715  runtime=60
00:25:49.715  ioengine=libaio
00:25:49.715  direct=1
00:25:49.715  bs=4096
00:25:49.715  iodepth=1
00:25:49.715  norandommap=0
00:25:49.715  numjobs=1
00:25:49.715  
00:25:49.715  verify_dump=1
00:25:49.715  verify_backlog=512
00:25:49.715  verify_state_save=0
00:25:49.715  do_verify=1
00:25:49.715  verify=crc32c-intel
00:25:49.715  [job0]
00:25:49.715  filename=/dev/nvme0n1
00:25:49.715  Could not set queue depth (nvme0n1)
00:25:49.982  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:49.982  fio-3.35
00:25:49.982  Starting 1 thread
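The job file printed above is what the fio-wrapper flags appear to expand to: -i 4096 maps to bs=4096, -d 1 to iodepth=1, -t write to rw=write, -r 60 to time_based with runtime=60, and -v to the crc32c-intel verify block. Saved to a file it runs standalone; the file name is illustrative:

    fio job0.fio    # job0.fio holds the [global]/[job0] sections shown above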
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.518  true
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.518  true
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.518  true
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.518  true
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.518   13:51:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
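The four updates above raise the Delay0 avg/p99 read and write latencies from microseconds to tens of seconds (31000000 us is 31 s; the traced p99_write value, 310000000 us, is 310 s), well past the 15 s timeout the initiator requested with nvme connect -i 15, so in-flight I/O is expected to time out and be retried while the 3 s sleep holds the delay in place. The updates below then restore all four latencies to 30 us so the fio verify run can finish. Unit check:

    echo "$((31000000 / 1000000)) s"    # 31 s, well above the 15 s initiator timeout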
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:55.809  true
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:55.809  true
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:55.809  true
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:55.809  true
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:25:55.809   13:51:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3403137
00:26:52.145  
00:26:52.145  job0: (groupid=0, jobs=1): err= 0: pid=3403296: Sat Dec 14 13:52:49 2024
00:26:52.145    read: IOPS=1154, BW=4619KiB/s (4730kB/s)(271MiB/60000msec)
00:26:52.145      slat (usec): min=8, max=11857, avg= 9.66, stdev=62.89
00:26:52.145      clat (usec): min=81, max=42636k, avg=728.51, stdev=161974.73
00:26:52.145       lat (usec): min=103, max=42636k, avg=738.17, stdev=161974.74
00:26:52.145      clat percentiles (usec):
00:26:52.145       |  1.00th=[   99],  5.00th=[  102], 10.00th=[  104], 20.00th=[  106],
00:26:52.145       | 30.00th=[  109], 40.00th=[  111], 50.00th=[  113], 60.00th=[  115],
00:26:52.145       | 70.00th=[  117], 80.00th=[  120], 90.00th=[  124], 95.00th=[  127],
00:26:52.145       | 99.00th=[  135], 99.50th=[  139], 99.90th=[  149], 99.95th=[  163],
00:26:52.145       | 99.99th=[  302]
00:26:52.145    write: IOPS=1160, BW=4642KiB/s (4754kB/s)(272MiB/60000msec); 0 zone resets
00:26:52.145      slat (usec): min=8, max=317, avg=11.96, stdev= 2.33
00:26:52.145      clat (usec): min=76, max=377, avg=109.98, stdev= 8.99
00:26:52.145       lat (usec): min=101, max=458, avg=121.94, stdev= 9.39
00:26:52.145      clat percentiles (usec):
00:26:52.146       |  1.00th=[   96],  5.00th=[   99], 10.00th=[  101], 20.00th=[  103],
00:26:52.146       | 30.00th=[  105], 40.00th=[  108], 50.00th=[  110], 60.00th=[  112],
00:26:52.146       | 70.00th=[  114], 80.00th=[  117], 90.00th=[  121], 95.00th=[  125],
00:26:52.146       | 99.00th=[  133], 99.50th=[  139], 99.90th=[  163], 99.95th=[  206],
00:26:52.146       | 99.99th=[  310]
00:26:52.146     bw (  KiB/s): min= 3200, max=16936, per=100.00%, avg=15519.31, stdev=2565.99, samples=35
00:26:52.146     iops        : min=  800, max= 4234, avg=3879.83, stdev=641.50, samples=35
00:26:52.146    lat (usec)   : 100=5.25%, 250=94.72%, 500=0.03%
00:26:52.146    lat (msec)   : 2=0.01%, >=2000=0.01%
00:26:52.146    cpu          : usr=1.74%, sys=3.12%, ctx=138931, majf=0, minf=107
00:26:52.146    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:52.146       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:52.146       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:52.146       issued rwts: total=69289,69632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:52.146       latency   : target=0, window=0, percentile=100.00%, depth=1
00:26:52.146  
00:26:52.146  Run status group 0 (all jobs):
00:26:52.146     READ: bw=4619KiB/s (4730kB/s), 4619KiB/s-4619KiB/s (4730kB/s-4730kB/s), io=271MiB (284MB), run=60000-60000msec
00:26:52.146    WRITE: bw=4642KiB/s (4754kB/s), 4642KiB/s-4642KiB/s (4754kB/s-4754kB/s), io=272MiB (285MB), run=60000-60000msec
00:26:52.146  
00:26:52.146  Disk stats (read/write):
00:26:52.146    nvme0n1: ios=69327/69120, merge=0/0, ticks=7190/7071, in_queue=14261, util=99.81%
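The injected stall is visible in the tail of the results above: the max clat of 42636k usec is roughly 42.6 s, and the >=2000 msec bucket (0.01% of I/Os) holds the I/Os caught while the Delay0 latencies were raised, while everything else completed in the usual 100-140 us range. The bandwidth samples tell the same story: roughly 15.2 MiB/s while I/O was flowing (avg=15519 KiB/s) against a 60 s overall rate of about 4.5 MiB/s. Quick conversion:

    echo "$((42636000 / 1000000)) s"    # worst-case completion, consistent with the injected delay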
00:26:52.146   13:52:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:26:52.146  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:26:52.146  nvmf hotplug test: fio successful as expected
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:26:52.146  rmmod nvme_rdma
00:26:52.146  rmmod nvme_fabrics
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3402336 ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3402336
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3402336 ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3402336
00:26:52.146    13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:52.146    13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3402336
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3402336'
00:26:52.146  killing process with pid 3402336
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3402336
00:26:52.146   13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3402336
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:26:53.083  
00:26:53.083  real	1m14.763s
00:26:53.083  user	4m39.124s
00:26:53.083  sys	0m7.967s
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:53.083  ************************************
00:26:53.083  END TEST nvmf_initiator_timeout
00:26:53.083  ************************************
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']'
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]]
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:53.083  ************************************
00:26:53.083  START TEST nvmf_srq_overwhelm
00:26:53.083  ************************************
00:26:53.083   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:26:53.343  * Looking for test storage...
00:26:53.343  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-:
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-:
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0
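The `lt 1.15 2` walk above is scripts/common.sh's cmp_versions: split both version strings on `.`, `-`, or `:`, then compare component by component, treating missing components as zero. A compact equivalent, assuming purely numeric components (the real script validates each field through its decimal helper):

    ver_lt() {    # ver_lt A B -> exit 0 iff version A < version B
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not strictly less
    }
    ver_lt 1.15 2 && echo "installed lcov is pre-2.0"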
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:53.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.343  		--rc genhtml_branch_coverage=1
00:26:53.343  		--rc genhtml_function_coverage=1
00:26:53.343  		--rc genhtml_legend=1
00:26:53.343  		--rc geninfo_all_blocks=1
00:26:53.343  		--rc geninfo_unexecuted_blocks=1
00:26:53.343  		
00:26:53.343  		'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:53.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.343  		--rc genhtml_branch_coverage=1
00:26:53.343  		--rc genhtml_function_coverage=1
00:26:53.343  		--rc genhtml_legend=1
00:26:53.343  		--rc geninfo_all_blocks=1
00:26:53.343  		--rc geninfo_unexecuted_blocks=1
00:26:53.343  		
00:26:53.343  		'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:53.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.343  		--rc genhtml_branch_coverage=1
00:26:53.343  		--rc genhtml_function_coverage=1
00:26:53.343  		--rc genhtml_legend=1
00:26:53.343  		--rc geninfo_all_blocks=1
00:26:53.343  		--rc geninfo_unexecuted_blocks=1
00:26:53.343  		
00:26:53.343  		'
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:53.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.343  		--rc genhtml_branch_coverage=1
00:26:53.343  		--rc genhtml_function_coverage=1
00:26:53.343  		--rc genhtml_legend=1
00:26:53.343  		--rc geninfo_all_blocks=1
00:26:53.343  		--rc geninfo_unexecuted_blocks=1
00:26:53.343  		
00:26:53.343  		'
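Because the version check passed (installed lcov < 2), the harness exports the legacy pre-2.0 `--rc` spellings above (lcov_branch_coverage, genhtml_branch_coverage, and so on). Downstream they would feed a capture step along these lines (directory and file names illustrative, not from this run):

    lcov $LCOV_OPTS --capture --directory build --output-file coverage.info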
00:26:53.343   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:53.343    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:53.343     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
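`nvme gen-hostnqn` mints the uuid-based host NQN reused by every `nvme connect` later in this test, and the host ID is the same uuid without the NQN prefix. One way to derive both, matching the values in this log:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # e.g. 8013ee90-59d8-e711-906e-00163566263e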
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:26:53.344     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob
00:26:53.344     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:53.344     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:53.344     13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:53.344      13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.344      13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.344      13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.344      13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH
00:26:53.344      13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
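Note the PATH echoed above carries the go/protoc/golangci directories six times over: paths/export.sh prepends unconditionally every time it is sourced. Harmless, but an idempotent prepend avoids the growth (helper name is ours, not SPDK's):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin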
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:53.344  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0
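The `[: : integer expression expected` complaint above is nvmf/common.sh line 33 evaluating `[ '' -eq 1 ]`: the tested variable is unset, and `-eq` needs integers on both sides. The run continues because the test merely fails, but the usual guard is to default the expansion (VAR below is a stand-in, not the actual variable name at line 33):

    if [[ "${VAR:-0}" -eq 1 ]]; then
        :    # branch body elided; the point is the :-0 default
    fi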
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16'
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:53.344    13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable
00:26:53.344   13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=()
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # local -ga mlx
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:26:59.911  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:26:59.911   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:26:59.912  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:26:59.912  Found net devices under 0000:d9:00.0: mlx_0_0
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:26:59.912  Found net devices under 0000:d9:00.1: mlx_0_1
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 ))
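Each "Found net devices under ..." line comes from globbing the PCI device's net/ directory in sysfs, which is how a bus address such as 0000:d9:00.0 resolves to its renamed interface mlx_0_0. The same lookup by hand:

    pci=0000:d9:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # -> mlx_0_0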
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm
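rdma_device_init loads the full kernel RDMA stack before any addresses are assigned; the modprobe sequence above, in order, is:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # nvme-rdma itself is loaded further down, once the target IPs are known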
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}'
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:26:59.912  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:26:59.912      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:26:59.912      altname enp217s0f0np0
00:26:59.912      altname ens818f0np0
00:26:59.912      inet 192.168.100.8/24 scope global mlx_0_0
00:26:59.912         valid_lft forever preferred_lft forever
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}'
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:26:59.912  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:26:59.912      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:26:59.912      altname enp217s0f1np1
00:26:59.912      altname ens818f1np1
00:26:59.912      inet 192.168.100.9/24 scope global mlx_0_1
00:26:59.912         valid_lft forever preferred_lft forever
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0
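allocate_nic_ips hands each RDMA port an address from the 192.168.100.0/24 pool starting at .8 (NVMF_IP_LEAST_ADDR), and get_ip_address reads it back by parsing `ip -o -4`. The read-back pipeline, exactly as traced:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8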
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:26:59.912   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:26:59.912    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:26:59.912      13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:26:59.912      13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:26:59.912     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.913     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:26:59.913     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:26:59.913     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:26:59.913     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1
00:26:59.913     13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}'
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}'
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:26:59.913  192.168.100.9'
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:26:59.913  192.168.100.9'
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:26:59.913  192.168.100.9'
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2
00:26:59.913    13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma
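With both ports addressed, nvmftestinit splits the two-line RDMA_IP_LIST into first and second target IPs with head/tail, fixes NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024', and loads nvme-rdma on the initiator side. The split, reduced to a sketch:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)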
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3416907
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3416907
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3416907 ']'
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:59.913  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:59.913   13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:00.172  [2024-12-14 13:52:59.706997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:00.172  [2024-12-14 13:52:59.707086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:00.172  [2024-12-14 13:52:59.834945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:00.430  [2024-12-14 13:52:59.934597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:00.430  [2024-12-14 13:52:59.934649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:00.430  [2024-12-14 13:52:59.934661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:00.430  [2024-12-14 13:52:59.934691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:00.430  [2024-12-14 13:52:59.934701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:00.430  [2024-12-14 13:52:59.937391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:00.430  [2024-12-14 13:52:59.937464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:27:00.430  [2024-12-14 13:52:59.937558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:00.430  [2024-12-14 13:52:59.937566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
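nvmf_tgt was launched with `-m 0xF`, and the four reactor notices above confirm the mask decoded to cores 0-3 (0xF = 0b1111, one bit per core). A quick decode of any such mask:

    mask=0xF
    for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' core%d' "$i"; done; echo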
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:00.997  [2024-12-14 13:53:00.613412] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7fabf475a940) succeed.
00:27:00.997  [2024-12-14 13:53:00.623432] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7fabf4714940) succeed.
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
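rpc_cmd is a thin wrapper that drives scripts/rpc.py against the /var/tmp/spdk.sock socket waited on above, so the transport call corresponds to the raw invocation below (flags copied from the log); the two create_ib_device notices confirm both mlx5 ports were claimed:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192 -s 1024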
00:27:00.997    13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.997   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:01.256  Malloc0
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:01.256  [2024-12-14 13:53:00.797363] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.256   13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
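Each pass of the `seq 0 5` loop repeats the five steps just traced for cnode0: create a subsystem, back it with a 64 MiB, 512 B-block malloc bdev, attach the namespace, listen on 192.168.100.8:4420 over RDMA, connect (the script's `-i 16` was overridden to `-i 15` by the per-device check during PCI enumeration), and poll until the block device appears. One iteration as raw commands (rpc.py path abbreviated; the until loop is a simplified stand-in for waitforblk):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 0.1; done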
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.190   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:02.191  Malloc1
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.191   13:53:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:03.566  Malloc2
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.566   13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:03.566   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.566   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:04.501   13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:04.501  Malloc3
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.501   13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.436   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:05.694  Malloc4
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.694   13:53:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:06.630  Malloc5
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.630   13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:27:07.565   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:27:07.565   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:07.565   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:07.565   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1
00:27:07.823   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:07.823   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1
00:27:07.823   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
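The traced commands above repeat the same per-subsystem pattern (target/srq_overwhelm.sh lines 22-28) for each cnodeN. A minimal sketch of that loop, reconstructed only from the commands visible in the trace (the hostnqn/hostid UUID, the 192.168.100.8 RDMA listener, and port 4420 are the values from this run; rpc_cmd and waitforblk are the helpers the trace itself invokes):

  # Reconstruction of the setup loop seen in the xtrace; a sketch, not the script itself.
  for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e \
      -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforblk "nvme${i}n1"                              # poll lsblk until the new namespace shows up
  done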
00:27:07.823   13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:27:07.823  [global]
00:27:07.823  thread=1
00:27:07.823  invalidate=1
00:27:07.823  rw=read
00:27:07.823  time_based=1
00:27:07.823  runtime=10
00:27:07.823  ioengine=libaio
00:27:07.823  direct=1
00:27:07.823  bs=1048576
00:27:07.823  iodepth=128
00:27:07.823  norandommap=1
00:27:07.823  numjobs=13
00:27:07.823  
00:27:07.823  [job0]
00:27:07.823  filename=/dev/nvme0n1
00:27:07.823  [job1]
00:27:07.823  filename=/dev/nvme1n1
00:27:07.823  [job2]
00:27:07.823  filename=/dev/nvme2n1
00:27:07.823  [job3]
00:27:07.823  filename=/dev/nvme3n1
00:27:07.823  [job4]
00:27:07.823  filename=/dev/nvme4n1
00:27:07.823  [job5]
00:27:07.823  filename=/dev/nvme5n1
00:27:07.823  Could not set queue depth (nvme0n1)
00:27:07.823  Could not set queue depth (nvme1n1)
00:27:07.823  Could not set queue depth (nvme2n1)
00:27:07.823  Could not set queue depth (nvme3n1)
00:27:07.823  Could not set queue depth (nvme4n1)
00:27:07.823  Could not set queue depth (nvme5n1)
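The generated job file above drives six NVMe-oF namespaces with 13 threads each of 1 MiB sequential reads at iodepth 128 for a time-based 10 s run (the "Could not set queue depth" lines are fio warnings; the run proceeds below). One job of that config expressed as plain fio CLI flags, assuming a direct flag-for-flag mapping of the [global] section:

  # Single-device equivalent of job0 above (a sketch; flag mapping assumed from the job file).
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
      --ioengine=libaio --direct=1 --invalidate=1 --norandommap=1 \
      --thread=1 --time_based=1 --runtime=10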
00:27:08.081  job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:08.081  ...
00:27:08.081  fio-3.35
00:27:08.081  Starting 78 threads
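The thread count follows directly from the job file: six [jobN] sections times numjobs=13.

  # 6 job sections x numjobs=13 = 78 fio threads
  echo $((6 * 13))   # -> 78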
00:27:22.964  
00:27:22.964  job0: (groupid=0, jobs=1): err= 0: pid=3418504: Sat Dec 14 13:53:20 2024
00:27:22.964    read: IOPS=3, BW=3174KiB/s (3250kB/s)(38.0MiB/12260msec)
00:27:22.964      slat (msec): min=2, max=2113, avg=267.02, stdev=665.31
00:27:22.964      clat (msec): min=2112, max=12241, avg=9304.39, stdev=3249.19
00:27:22.964       lat (msec): min=4171, max=12259, avg=9571.41, stdev=3053.23
00:27:22.964      clat percentiles (msec):
00:27:22.964       |  1.00th=[ 2106],  5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:27:22.964       | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12013],
00:27:22.964       | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12281],
00:27:22.964       | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:27:22.964       | 99.99th=[12281]
00:27:22.964    lat (msec)   : >=2000=100.00%
00:27:22.964    cpu          : usr=0.00%, sys=0.33%, ctx=74, majf=0, minf=9729
00:27:22.964    IO depths    : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0%
00:27:22.964       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.964       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:27:22.964       issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.964       latency   : target=0, window=0, percentile=100.00%, depth=128
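job0's reported bandwidth can be cross-checked against the totals on its read line, 38.0 MiB transferred in 12260 msec:

  # Sanity check: bandwidth = bytes moved / wall time.
  echo "38 * 1024 / 12.260" | bc -l   # ~3173.9 KiB/s, matching BW=3174KiB/s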
00:27:22.964  job0: (groupid=0, jobs=1): err= 0: pid=3418505: Sat Dec 14 13:53:20 2024
00:27:22.964    read: IOPS=25, BW=25.6MiB/s (26.8MB/s)(313MiB/12240msec)
00:27:22.964      slat (usec): min=33, max=2106.6k, avg=32349.35, stdev=211555.16
00:27:22.964      clat (msec): min=658, max=7812, avg=4745.72, stdev=2385.62
00:27:22.964       lat (msec): min=659, max=7813, avg=4778.07, stdev=2387.20
00:27:22.964      clat percentiles (msec):
00:27:22.964       |  1.00th=[  659],  5.00th=[  667], 10.00th=[ 1351], 20.00th=[ 3440],
00:27:22.964       | 30.00th=[ 3574], 40.00th=[ 3742], 50.00th=[ 3910], 60.00th=[ 5604],
00:27:22.964       | 70.00th=[ 7282], 80.00th=[ 7416], 90.00th=[ 7684], 95.00th=[ 7684],
00:27:22.964       | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819],
00:27:22.964       | 99.99th=[ 7819]
00:27:22.964     bw (  KiB/s): min= 1503, max=180224, per=1.88%, avg=54340.43, stdev=69679.09, samples=7
00:27:22.964     iops        : min=    1, max=  176, avg=53.00, stdev=68.11, samples=7
00:27:22.964    lat (msec)   : 750=7.03%, 2000=10.54%, >=2000=82.43%
00:27:22.964    cpu          : usr=0.01%, sys=0.88%, ctx=421, majf=0, minf=32769
00:27:22.964    IO depths    : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9%
00:27:22.964       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.964       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.964       issued rwts: total=313,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.964       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.964  job0: (groupid=0, jobs=1): err= 0: pid=3418506: Sat Dec 14 13:53:20 2024
00:27:22.964    read: IOPS=25, BW=25.7MiB/s (27.0MB/s)(314MiB/12202msec)
00:27:22.964      slat (usec): min=55, max=2152.6k, avg=32170.41, stdev=214290.16
00:27:22.964      clat (msec): min=550, max=10765, avg=4604.91, stdev=4479.50
00:27:22.964       lat (msec): min=553, max=10766, avg=4637.08, stdev=4487.00
00:27:22.964      clat percentiles (msec):
00:27:22.964       |  1.00th=[  550],  5.00th=[  584], 10.00th=[  617], 20.00th=[  735],
00:27:22.964       | 30.00th=[  894], 40.00th=[  969], 50.00th=[ 1133], 60.00th=[ 6342],
00:27:22.964       | 70.00th=[ 9731], 80.00th=[10134], 90.00th=[10402], 95.00th=[10671],
00:27:22.964       | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:27:22.964       | 99.99th=[10805]
00:27:22.964     bw (  KiB/s): min= 1590, max=157696, per=1.65%, avg=47784.88, stdev=59079.84, samples=8
00:27:22.964     iops        : min=    1, max=  154, avg=46.25, stdev=57.85, samples=8
00:27:22.964    lat (msec)   : 750=20.70%, 1000=21.02%, 2000=15.61%, >=2000=42.68%
00:27:22.964    cpu          : usr=0.02%, sys=0.97%, ctx=522, majf=0, minf=32769
00:27:22.964    IO depths    : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=79.9%
00:27:22.964       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.964       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.964       issued rwts: total=314,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.964       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.964  job0: (groupid=0, jobs=1): err= 0: pid=3418507: Sat Dec 14 13:53:20 2024
00:27:22.964    read: IOPS=9, BW=9851KiB/s (10.1MB/s)(117MiB/12162msec)
00:27:22.964      slat (usec): min=425, max=2087.3k, avg=85875.77, stdev=369576.23
00:27:22.964      clat (msec): min=2113, max=12120, avg=10571.29, stdev=1873.27
00:27:22.964       lat (msec): min=4175, max=12161, avg=10657.17, stdev=1704.92
00:27:22.964      clat percentiles (msec):
00:27:22.964       |  1.00th=[ 4178],  5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[10537],
00:27:22.964       | 30.00th=[10805], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208],
00:27:22.964       | 70.00th=[11342], 80.00th=[11610], 90.00th=[11879], 95.00th=[12013],
00:27:22.964       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.964       | 99.99th=[12147]
00:27:22.964    lat (msec)   : >=2000=100.00%
00:27:22.964    cpu          : usr=0.02%, sys=0.77%, ctx=281, majf=0, minf=29953
00:27:22.965    IO depths    : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.7%, 32=27.4%, >=64=46.2%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:27:22.965       issued rwts: total=117,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418508: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=10, BW=10.8MiB/s (11.3MB/s)(132MiB/12222msec)
00:27:22.965      slat (usec): min=479, max=2090.4k, avg=76579.46, stdev=342616.21
00:27:22.965      clat (msec): min=2112, max=12107, avg=10665.59, stdev=1656.26
00:27:22.965       lat (msec): min=4199, max=12109, avg=10742.16, stdev=1479.02
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[ 4212],  5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[10402],
00:27:22.965       | 30.00th=[10537], 40.00th=[10805], 50.00th=[11073], 60.00th=[11476],
00:27:22.965       | 70.00th=[11610], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879],
00:27:22.965       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.965       | 99.99th=[12147]
00:27:22.965     bw (  KiB/s): min= 1537, max= 4096, per=0.08%, avg=2431.00, stdev=1135.59, samples=4
00:27:22.965     iops        : min=    1, max=    4, avg= 2.00, stdev= 1.41, samples=4
00:27:22.965    lat (msec)   : >=2000=100.00%
00:27:22.965    cpu          : usr=0.00%, sys=0.65%, ctx=372, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.1%, 16=12.1%, 32=24.2%, >=64=52.3%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=83.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=16.7%
00:27:22.965       issued rwts: total=132,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418509: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(368MiB/12252msec)
00:27:22.965      slat (usec): min=57, max=2093.8k, avg=27565.69, stdev=186909.76
00:27:22.965      clat (msec): min=667, max=9167, avg=3911.59, stdev=3537.92
00:27:22.965       lat (msec): min=674, max=9174, avg=3939.16, stdev=3541.41
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[  676],  5.00th=[  743], 10.00th=[  802], 20.00th=[ 1083],
00:27:22.965       | 30.00th=[ 1284], 40.00th=[ 1485], 50.00th=[ 1804], 60.00th=[ 1955],
00:27:22.965       | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060],
00:27:22.965       | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:27:22.965       | 99.99th=[ 9194]
00:27:22.965     bw (  KiB/s): min= 1503, max=178176, per=1.90%, avg=54777.11, stdev=64584.69, samples=9
00:27:22.965     iops        : min=    1, max=  174, avg=53.33, stdev=63.20, samples=9
00:27:22.965    lat (msec)   : 750=5.98%, 1000=11.96%, 2000=44.02%, >=2000=38.04%
00:27:22.965    cpu          : usr=0.02%, sys=0.91%, ctx=784, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.7%, >=64=82.9%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.965       issued rwts: total=368,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418510: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=12, BW=12.8MiB/s (13.4MB/s)(157MiB/12300msec)
00:27:22.965      slat (usec): min=139, max=2110.5k, avg=64885.26, stdev=332067.56
00:27:22.965      clat (msec): min=791, max=12240, avg=9681.23, stdev=3767.35
00:27:22.965       lat (msec): min=794, max=12243, avg=9746.11, stdev=3722.75
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[  793],  5.00th=[  852], 10.00th=[ 1636], 20.00th=[ 6342],
00:27:22.965       | 30.00th=[11476], 40.00th=[11476], 50.00th=[11610], 60.00th=[11745],
00:27:22.965       | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147],
00:27:22.965       | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:27:22.965       | 99.99th=[12281]
00:27:22.965     bw (  KiB/s): min= 2052, max=26677, per=0.35%, avg=10246.67, stdev=8514.15, samples=6
00:27:22.965     iops        : min=    2, max=   26, avg= 9.83, stdev= 8.35, samples=6
00:27:22.965    lat (msec)   : 1000=8.28%, 2000=2.55%, >=2000=89.17%
00:27:22.965    cpu          : usr=0.01%, sys=0.87%, ctx=166, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2%
00:27:22.965       issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418511: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=78, BW=78.1MiB/s (81.9MB/s)(790MiB/10118msec)
00:27:22.965      slat (usec): min=42, max=4003.7k, avg=12654.55, stdev=142636.20
00:27:22.965      clat (msec): min=115, max=5925, avg=1489.79, stdev=1689.55
00:27:22.965       lat (msec): min=125, max=5930, avg=1502.45, stdev=1696.35
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[  205],  5.00th=[  451], 10.00th=[  527], 20.00th=[  542],
00:27:22.965       | 30.00th=[  600], 40.00th=[  651], 50.00th=[  693], 60.00th=[  961],
00:27:22.965       | 70.00th=[ 1099], 80.00th=[ 1301], 90.00th=[ 5134], 95.00th=[ 5537],
00:27:22.965       | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940],
00:27:22.965       | 99.99th=[ 5940]
00:27:22.965     bw (  KiB/s): min=16384, max=243712, per=4.73%, avg=136516.44, stdev=84640.92, samples=9
00:27:22.965     iops        : min=   16, max=  238, avg=133.22, stdev=82.73, samples=9
00:27:22.965    lat (msec)   : 250=2.28%, 500=3.92%, 750=45.44%, 1000=14.05%, 2000=18.23%
00:27:22.965    lat (msec)   : >=2000=16.08%
00:27:22.965    cpu          : usr=0.04%, sys=1.73%, ctx=941, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.965       issued rwts: total=790,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418512: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=15, BW=15.6MiB/s (16.4MB/s)(191MiB/12208msec)
00:27:22.965      slat (usec): min=598, max=2101.3k, avg=52848.28, stdev=274723.70
00:27:22.965      clat (msec): min=973, max=9954, avg=6733.37, stdev=3586.38
00:27:22.965       lat (msec): min=977, max=9956, avg=6786.22, stdev=3562.06
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[  978],  5.00th=[ 1045], 10.00th=[ 1183], 20.00th=[ 1368],
00:27:22.965       | 30.00th=[ 3675], 40.00th=[ 8490], 50.00th=[ 9194], 60.00th=[ 9329],
00:27:22.965       | 70.00th=[ 9463], 80.00th=[ 9597], 90.00th=[ 9866], 95.00th=[ 9866],
00:27:22.965       | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:27:22.965       | 99.99th=[10000]
00:27:22.965     bw (  KiB/s): min= 1563, max=67584, per=0.65%, avg=18653.29, stdev=25122.65, samples=7
00:27:22.965     iops        : min=    1, max=   66, avg=17.86, stdev=24.80, samples=7
00:27:22.965    lat (msec)   : 1000=2.62%, 2000=19.37%, >=2000=78.01%
00:27:22.965    cpu          : usr=0.01%, sys=0.70%, ctx=442, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=67.0%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5%
00:27:22.965       issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418513: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(323MiB/12198msec)
00:27:22.965      slat (usec): min=65, max=2074.7k, avg=31260.15, stdev=161707.94
00:27:22.965      clat (msec): min=729, max=9449, avg=4135.21, stdev=2622.45
00:27:22.965       lat (msec): min=731, max=9451, avg=4166.47, stdev=2629.90
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[  735],  5.00th=[  802], 10.00th=[  844], 20.00th=[  978],
00:27:22.965       | 30.00th=[ 2400], 40.00th=[ 3138], 50.00th=[ 3742], 60.00th=[ 4178],
00:27:22.965       | 70.00th=[ 5805], 80.00th=[ 6544], 90.00th=[ 7886], 95.00th=[ 9329],
00:27:22.965       | 99.00th=[ 9329], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463],
00:27:22.965       | 99.99th=[ 9463]
00:27:22.965     bw (  KiB/s): min= 1600, max=133120, per=1.26%, avg=36439.27, stdev=39118.65, samples=11
00:27:22.965     iops        : min=    1, max=  130, avg=35.36, stdev=38.28, samples=11
00:27:22.965    lat (msec)   : 750=3.10%, 1000=18.27%, 2000=3.41%, >=2000=75.23%
00:27:22.965    cpu          : usr=0.04%, sys=1.09%, ctx=601, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.5%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.965       issued rwts: total=323,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418514: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=27, BW=27.5MiB/s (28.9MB/s)(278MiB/10093msec)
00:27:22.965      slat (usec): min=55, max=2121.3k, avg=35971.26, stdev=152423.31
00:27:22.965      clat (msec): min=91, max=8260, avg=3002.24, stdev=2508.77
00:27:22.965       lat (msec): min=96, max=8270, avg=3038.21, stdev=2525.44
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[   99],  5.00th=[  239], 10.00th=[  372], 20.00th=[  768],
00:27:22.965       | 30.00th=[ 1167], 40.00th=[ 1687], 50.00th=[ 2366], 60.00th=[ 3104],
00:27:22.965       | 70.00th=[ 3742], 80.00th=[ 4396], 90.00th=[ 7752], 95.00th=[ 8087],
00:27:22.965       | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288],
00:27:22.965       | 99.99th=[ 8288]
00:27:22.965     bw (  KiB/s): min=22528, max=90112, per=1.78%, avg=51515.33, stdev=27612.07, samples=6
00:27:22.965     iops        : min=   22, max=   88, avg=50.17, stdev=26.81, samples=6
00:27:22.965    lat (msec)   : 100=1.08%, 250=4.32%, 500=7.91%, 750=6.47%, 1000=7.19%
00:27:22.965    lat (msec)   : 2000=17.27%, >=2000=55.76%
00:27:22.965    cpu          : usr=0.02%, sys=1.50%, ctx=807, majf=0, minf=32769
00:27:22.965    IO depths    : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.5%, >=64=77.3%
00:27:22.965       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.965       complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:27:22.965       issued rwts: total=278,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.965       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.965  job0: (groupid=0, jobs=1): err= 0: pid=3418515: Sat Dec 14 13:53:20 2024
00:27:22.965    read: IOPS=26, BW=26.4MiB/s (27.7MB/s)(270MiB/10212msec)
00:27:22.965      slat (usec): min=973, max=2113.3k, avg=37799.46, stdev=162808.71
00:27:22.965      clat (msec): min=4, max=6947, avg=3415.96, stdev=1686.12
00:27:22.965       lat (msec): min=1011, max=6954, avg=3453.76, stdev=1686.84
00:27:22.965      clat percentiles (msec):
00:27:22.965       |  1.00th=[ 1020],  5.00th=[ 1183], 10.00th=[ 1485], 20.00th=[ 2198],
00:27:22.965       | 30.00th=[ 2735], 40.00th=[ 2903], 50.00th=[ 3037], 60.00th=[ 3239],
00:27:22.965       | 70.00th=[ 3440], 80.00th=[ 3641], 90.00th=[ 6611], 95.00th=[ 6745],
00:27:22.965       | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946],
00:27:22.966       | 99.99th=[ 6946]
00:27:22.966     bw (  KiB/s): min=26570, max=53248, per=1.44%, avg=41537.43, stdev=10369.35, samples=7
00:27:22.966     iops        : min=   25, max=   52, avg=40.43, stdev=10.36, samples=7
00:27:22.966    lat (msec)   : 10=0.37%, 2000=16.30%, >=2000=83.33%
00:27:22.966    cpu          : usr=0.02%, sys=1.44%, ctx=722, majf=0, minf=32769
00:27:22.966    IO depths    : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.7%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:27:22.966       issued rwts: total=270,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job0: (groupid=0, jobs=1): err= 0: pid=3418516: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=30, BW=30.4MiB/s (31.8MB/s)(306MiB/10074msec)
00:27:22.966      slat (usec): min=120, max=2100.2k, avg=32678.30, stdev=142748.11
00:27:22.966      clat (msec): min=71, max=6427, avg=2264.07, stdev=1262.57
00:27:22.966       lat (msec): min=74, max=6435, avg=2296.75, stdev=1287.01
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[   80],  5.00th=[  163], 10.00th=[  355], 20.00th=[  860],
00:27:22.966       | 30.00th=[ 1703], 40.00th=[ 2467], 50.00th=[ 2668], 60.00th=[ 2836],
00:27:22.966       | 70.00th=[ 2903], 80.00th=[ 3004], 90.00th=[ 3071], 95.00th=[ 3138],
00:27:22.966       | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409],
00:27:22.966       | 99.99th=[ 6409]
00:27:22.966     bw (  KiB/s): min=14336, max=55296, per=1.36%, avg=39192.86, stdev=14010.22, samples=7
00:27:22.966     iops        : min=   14, max=   54, avg=38.14, stdev=13.67, samples=7
00:27:22.966    lat (msec)   : 100=2.29%, 250=4.58%, 500=5.88%, 750=4.90%, 1000=4.25%
00:27:22.966    lat (msec)   : 2000=11.44%, >=2000=66.67%
00:27:22.966    cpu          : usr=0.02%, sys=1.13%, ctx=798, majf=0, minf=32769
00:27:22.966    IO depths    : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:27:22.966       issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418519: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(354MiB/10232msec)
00:27:22.966      slat (usec): min=43, max=2090.8k, avg=28795.60, stdev=189712.61
00:27:22.966      clat (msec): min=35, max=8978, avg=3554.12, stdev=2695.60
00:27:22.966       lat (msec): min=631, max=8979, avg=3582.91, stdev=2702.39
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[  625],  5.00th=[  642], 10.00th=[  676], 20.00th=[  751],
00:27:22.966       | 30.00th=[  810], 40.00th=[ 2198], 50.00th=[ 3742], 60.00th=[ 4396],
00:27:22.966       | 70.00th=[ 4665], 80.00th=[ 5671], 90.00th=[ 8792], 95.00th=[ 8926],
00:27:22.966       | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:27:22.966       | 99.99th=[ 8926]
00:27:22.966     bw (  KiB/s): min= 4096, max=172032, per=1.78%, avg=51398.33, stdev=53914.61, samples=9
00:27:22.966     iops        : min=    4, max=  168, avg=50.00, stdev=52.61, samples=9
00:27:22.966    lat (msec)   : 50=0.28%, 750=19.77%, 1000=18.93%, >=2000=61.02%
00:27:22.966    cpu          : usr=0.03%, sys=1.59%, ctx=515, majf=0, minf=32770
00:27:22.966    IO depths    : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.966       issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418520: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=30, BW=30.1MiB/s (31.6MB/s)(369MiB/12249msec)
00:27:22.966      slat (usec): min=37, max=2095.4k, avg=27472.62, stdev=159586.63
00:27:22.966      clat (msec): min=995, max=6920, avg=3487.62, stdev=994.74
00:27:22.966       lat (msec): min=1121, max=8373, avg=3515.09, stdev=1014.49
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[ 1116],  5.00th=[ 1620], 10.00th=[ 2500], 20.00th=[ 2668],
00:27:22.966       | 30.00th=[ 3205], 40.00th=[ 3339], 50.00th=[ 3574], 60.00th=[ 3641],
00:27:22.966       | 70.00th=[ 3742], 80.00th=[ 4010], 90.00th=[ 4933], 95.00th=[ 5000],
00:27:22.966       | 99.00th=[ 5604], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946],
00:27:22.966       | 99.99th=[ 6946]
00:27:22.966     bw (  KiB/s): min= 1503, max=131072, per=1.71%, avg=49507.10, stdev=42828.90, samples=10
00:27:22.966     iops        : min=    1, max=  128, avg=48.30, stdev=41.88, samples=10
00:27:22.966    lat (msec)   : 1000=0.27%, 2000=8.67%, >=2000=91.06%
00:27:22.966    cpu          : usr=0.02%, sys=1.08%, ctx=551, majf=0, minf=32769
00:27:22.966    IO depths    : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.7%, >=64=82.9%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.966       issued rwts: total=369,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418521: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=5, BW=5721KiB/s (5858kB/s)(68.0MiB/12171msec)
00:27:22.966      slat (usec): min=1020, max=4083.6k, avg=147943.30, stdev=582155.94
00:27:22.966      clat (msec): min=2109, max=12167, avg=7105.93, stdev=2528.94
00:27:22.966       lat (msec): min=4224, max=12170, avg=7253.88, stdev=2526.57
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[ 2106],  5.00th=[ 5269], 10.00th=[ 5403], 20.00th=[ 5604],
00:27:22.966       | 30.00th=[ 5738], 40.00th=[ 5873], 50.00th=[ 6007], 60.00th=[ 6141],
00:27:22.966       | 70.00th=[ 6342], 80.00th=[10537], 90.00th=[12147], 95.00th=[12147],
00:27:22.966       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.966       | 99.99th=[12147]
00:27:22.966    lat (msec)   : >=2000=100.00%
00:27:22.966    cpu          : usr=0.01%, sys=0.51%, ctx=249, majf=0, minf=17409
00:27:22.966    IO depths    : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:27:22.966       issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418522: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=22, BW=22.3MiB/s (23.4MB/s)(273MiB/12221msec)
00:27:22.966      slat (usec): min=43, max=2090.0k, avg=37033.10, stdev=249511.34
00:27:22.966      clat (msec): min=661, max=11309, avg=5497.74, stdev=4950.88
00:27:22.966       lat (msec): min=662, max=11312, avg=5534.77, stdev=4956.11
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[  667],  5.00th=[  693], 10.00th=[  709], 20.00th=[  743],
00:27:22.966       | 30.00th=[  751], 40.00th=[  802], 50.00th=[ 2106], 60.00th=[10805],
00:27:22.966       | 70.00th=[10805], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208],
00:27:22.966       | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342],
00:27:22.966       | 99.99th=[11342]
00:27:22.966     bw (  KiB/s): min= 1528, max=198656, per=1.29%, avg=37309.25, stdev=68820.74, samples=8
00:27:22.966     iops        : min=    1, max=  194, avg=36.12, stdev=67.39, samples=8
00:27:22.966    lat (msec)   : 750=28.21%, 1000=21.61%, >=2000=50.18%
00:27:22.966    cpu          : usr=0.00%, sys=0.80%, ctx=266, majf=0, minf=32769
00:27:22.966    IO depths    : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:27:22.966       issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418523: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=84, BW=84.7MiB/s (88.8MB/s)(854MiB/10088msec)
00:27:22.966      slat (usec): min=53, max=2101.4k, avg=11702.78, stdev=86443.74
00:27:22.966      clat (msec): min=86, max=6580, avg=1159.42, stdev=1065.43
00:27:22.966       lat (msec): min=91, max=6736, avg=1171.12, stdev=1077.94
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[  124],  5.00th=[  384], 10.00th=[  527], 20.00th=[  535],
00:27:22.966       | 30.00th=[  584], 40.00th=[  693], 50.00th=[  810], 60.00th=[  852],
00:27:22.966       | 70.00th=[  894], 80.00th=[  995], 90.00th=[ 3440], 95.00th=[ 3608],
00:27:22.966       | 99.00th=[ 3708], 99.50th=[ 3742], 99.90th=[ 6611], 99.95th=[ 6611],
00:27:22.966       | 99.99th=[ 6611]
00:27:22.966     bw (  KiB/s): min=28614, max=247808, per=4.71%, avg=136176.30, stdev=76426.92, samples=10
00:27:22.966     iops        : min=   27, max=  242, avg=132.80, stdev=74.90, samples=10
00:27:22.966    lat (msec)   : 100=0.47%, 250=2.46%, 500=3.75%, 750=38.64%, 1000=34.89%
00:27:22.966    lat (msec)   : 2000=4.92%, >=2000=14.87%
00:27:22.966    cpu          : usr=0.05%, sys=1.88%, ctx=962, majf=0, minf=32769
00:27:22.966    IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.6%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.966       issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418524: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=2, BW=2605KiB/s (2668kB/s)(31.0MiB/12184msec)
00:27:22.966      slat (usec): min=733, max=2137.6k, avg=324976.54, stdev=735238.83
00:27:22.966      clat (msec): min=2109, max=12183, avg=10292.73, stdev=2782.42
00:27:22.966       lat (msec): min=4178, max=12183, avg=10617.70, stdev=2349.42
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[ 2106],  5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490],
00:27:22.966       | 30.00th=[10671], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147],
00:27:22.966       | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:27:22.966       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.966       | 99.99th=[12147]
00:27:22.966    lat (msec)   : >=2000=100.00%
00:27:22.966    cpu          : usr=0.00%, sys=0.17%, ctx=60, majf=0, minf=7937
00:27:22.966    IO depths    : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0%
00:27:22.966       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.966       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:27:22.966       issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.966       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.966  job1: (groupid=0, jobs=1): err= 0: pid=3418525: Sat Dec 14 13:53:20 2024
00:27:22.966    read: IOPS=25, BW=25.3MiB/s (26.5MB/s)(309MiB/12226msec)
00:27:22.966      slat (usec): min=433, max=2067.4k, avg=32724.67, stdev=196127.01
00:27:22.966      clat (msec): min=1363, max=6107, avg=4697.32, stdev=1515.32
00:27:22.966       lat (msec): min=1387, max=6121, avg=4730.05, stdev=1499.39
00:27:22.966      clat percentiles (msec):
00:27:22.966       |  1.00th=[ 1469],  5.00th=[ 1552], 10.00th=[ 1653], 20.00th=[ 4245],
00:27:22.966       | 30.00th=[ 4530], 40.00th=[ 4799], 50.00th=[ 5067], 60.00th=[ 5671],
00:27:22.966       | 70.00th=[ 5873], 80.00th=[ 6007], 90.00th=[ 6007], 95.00th=[ 6007],
00:27:22.966       | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 6141], 99.95th=[ 6141],
00:27:22.967       | 99.99th=[ 6141]
00:27:22.967     bw (  KiB/s): min= 1523, max=114688, per=1.43%, avg=41354.33, stdev=43225.95, samples=9
00:27:22.967     iops        : min=    1, max=  112, avg=40.11, stdev=42.48, samples=9
00:27:22.967    lat (msec)   : 2000=16.50%, >=2000=83.50%
00:27:22.967    cpu          : usr=0.02%, sys=1.19%, ctx=680, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.6%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.967       issued rwts: total=309,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418526: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=137, BW=137MiB/s (144MB/s)(1381MiB/10057msec)
00:27:22.967      slat (usec): min=41, max=2126.4k, avg=7236.67, stdev=60184.48
00:27:22.967      clat (msec): min=53, max=5055, avg=759.10, stdev=808.06
00:27:22.967       lat (msec): min=56, max=5594, avg=766.33, stdev=818.75
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[  103],  5.00th=[  284], 10.00th=[  393], 20.00th=[  426],
00:27:22.967       | 30.00th=[  435], 40.00th=[  477], 50.00th=[  527], 60.00th=[  625],
00:27:22.967       | 70.00th=[  760], 80.00th=[  818], 90.00th=[  894], 95.00th=[ 3540],
00:27:22.967       | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 5067], 99.95th=[ 5067],
00:27:22.967       | 99.99th=[ 5067]
00:27:22.967     bw (  KiB/s): min=137216, max=323584, per=7.41%, avg=213939.33, stdev=69125.97, samples=12
00:27:22.967     iops        : min=  134, max=  316, avg=208.83, stdev=67.51, samples=12
00:27:22.967    lat (msec)   : 100=0.94%, 250=3.33%, 500=39.75%, 750=25.85%, 1000=24.26%
00:27:22.967    lat (msec)   : >=2000=5.87%
00:27:22.967    cpu          : usr=0.08%, sys=2.29%, ctx=1323, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.967       issued rwts: total=1381,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418527: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=198, BW=199MiB/s (209MB/s)(2426MiB/12200msec)
00:27:22.967      slat (usec): min=46, max=2034.4k, avg=4148.85, stdev=58646.62
00:27:22.967      clat (msec): min=130, max=6555, avg=591.32, stdev=1373.72
00:27:22.967       lat (msec): min=131, max=6557, avg=595.47, stdev=1378.63
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[  132],  5.00th=[  133], 10.00th=[  133], 20.00th=[  133],
00:27:22.967       | 30.00th=[  134], 40.00th=[  134], 50.00th=[  134], 60.00th=[  136],
00:27:22.967       | 70.00th=[  342], 80.00th=[  542], 90.00th=[  802], 95.00th=[ 4212],
00:27:22.967       | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544],
00:27:22.967       | 99.99th=[ 6544]
00:27:22.967     bw (  KiB/s): min= 1575, max=978944, per=13.58%, avg=392226.00, stdev=380914.49, samples=12
00:27:22.967     iops        : min=    1, max=  956, avg=382.83, stdev=372.16, samples=12
00:27:22.967    lat (msec)   : 250=69.54%, 500=7.42%, 750=12.41%, 1000=2.68%, 2000=2.23%
00:27:22.967    lat (msec)   : >=2000=5.73%
00:27:22.967    cpu          : usr=0.04%, sys=1.86%, ctx=2543, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.967       issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418528: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=67, BW=67.3MiB/s (70.6MB/s)(677MiB/10062msec)
00:27:22.967      slat (usec): min=44, max=2062.1k, avg=14834.03, stdev=108525.99
00:27:22.967      clat (msec): min=14, max=5848, avg=1615.51, stdev=1744.23
00:27:22.967       lat (msec): min=72, max=5872, avg=1630.34, stdev=1752.19
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[   94],  5.00th=[  321], 10.00th=[  575], 20.00th=[  844],
00:27:22.967       | 30.00th=[  860], 40.00th=[  894], 50.00th=[  927], 60.00th=[  961],
00:27:22.967       | 70.00th=[  986], 80.00th=[ 1070], 90.00th=[ 5537], 95.00th=[ 5671],
00:27:22.967       | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873],
00:27:22.967       | 99.99th=[ 5873]
00:27:22.967     bw (  KiB/s): min=32768, max=157696, per=4.31%, avg=124601.38, stdev=41959.71, samples=8
00:27:22.967     iops        : min=   32, max=  154, avg=121.50, stdev=40.90, samples=8
00:27:22.967    lat (msec)   : 20=0.15%, 100=1.33%, 250=3.25%, 500=4.58%, 750=4.43%
00:27:22.967    lat (msec)   : 1000=58.79%, 2000=10.34%, >=2000=17.13%
00:27:22.967    cpu          : usr=0.07%, sys=1.67%, ctx=937, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.967       issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418529: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(253MiB/12169msec)
00:27:22.967      slat (usec): min=36, max=2014.3k, avg=39753.38, stdev=194011.96
00:27:22.967      clat (msec): min=2109, max=7917, avg=5102.35, stdev=1365.21
00:27:22.967       lat (msec): min=2974, max=7922, avg=5142.11, stdev=1347.42
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[ 2970],  5.00th=[ 3171], 10.00th=[ 3574], 20.00th=[ 3910],
00:27:22.967       | 30.00th=[ 4111], 40.00th=[ 4279], 50.00th=[ 4463], 60.00th=[ 5671],
00:27:22.967       | 70.00th=[ 6074], 80.00th=[ 6477], 90.00th=[ 7013], 95.00th=[ 7483],
00:27:22.967       | 99.00th=[ 7752], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886],
00:27:22.967       | 99.99th=[ 7886]
00:27:22.967     bw (  KiB/s): min= 1659, max=61440, per=0.99%, avg=28627.33, stdev=22564.16, samples=9
00:27:22.967     iops        : min=    1, max=   60, avg=27.78, stdev=22.25, samples=9
00:27:22.967    lat (msec)   : >=2000=100.00%
00:27:22.967    cpu          : usr=0.01%, sys=0.97%, ctx=465, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.6%, >=64=75.1%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:27:22.967       issued rwts: total=253,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418530: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(175MiB/12254msec)
00:27:22.967      slat (usec): min=482, max=2125.4k, avg=57962.94, stdev=256407.23
00:27:22.967      clat (msec): min=2109, max=7735, avg=5676.84, stdev=1591.79
00:27:22.967       lat (msec): min=2444, max=7749, avg=5734.80, stdev=1551.19
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[ 2433],  5.00th=[ 2534], 10.00th=[ 2567], 20.00th=[ 4732],
00:27:22.967       | 30.00th=[ 5537], 40.00th=[ 5873], 50.00th=[ 6141], 60.00th=[ 6208],
00:27:22.967       | 70.00th=[ 6409], 80.00th=[ 6946], 90.00th=[ 7349], 95.00th=[ 7483],
00:27:22.967       | 99.00th=[ 7617], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752],
00:27:22.967       | 99.99th=[ 7752]
00:27:22.967     bw (  KiB/s): min= 1503, max=49152, per=0.85%, avg=24439.75, stdev=26224.73, samples=4
00:27:22.967     iops        : min=    1, max=   48, avg=23.75, stdev=25.75, samples=4
00:27:22.967    lat (msec)   : >=2000=100.00%
00:27:22.967    cpu          : usr=0.02%, sys=0.68%, ctx=389, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.1%, 32=18.3%, >=64=64.0%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0%
00:27:22.967       issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job1: (groupid=0, jobs=1): err= 0: pid=3418531: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=12, BW=12.7MiB/s (13.3MB/s)(155MiB/12203msec)
00:27:22.967      slat (usec): min=634, max=2086.5k, avg=65099.29, stdev=280306.76
00:27:22.967      clat (msec): min=2106, max=11168, avg=8747.53, stdev=2851.01
00:27:22.967       lat (msec): min=2176, max=11230, avg=8812.62, stdev=2794.60
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[ 2106],  5.00th=[ 2265], 10.00th=[ 3171], 20.00th=[ 6409],
00:27:22.967       | 30.00th=[ 9597], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10134],
00:27:22.967       | 70.00th=[10402], 80.00th=[10537], 90.00th=[10939], 95.00th=[11073],
00:27:22.967       | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208],
00:27:22.967       | 99.99th=[11208]
00:27:22.967     bw (  KiB/s): min= 1575, max=20480, per=0.25%, avg=7103.00, stdev=7223.40, samples=8
00:27:22.967     iops        : min=    1, max=   20, avg= 6.50, stdev= 7.11, samples=8
00:27:22.967    lat (msec)   : >=2000=100.00%
00:27:22.967    cpu          : usr=0.02%, sys=0.78%, ctx=585, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4%
00:27:22.967       issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.967  job2: (groupid=0, jobs=1): err= 0: pid=3418532: Sat Dec 14 13:53:20 2024
00:27:22.967    read: IOPS=53, BW=53.2MiB/s (55.8MB/s)(538MiB/10105msec)
00:27:22.967      slat (usec): min=71, max=190203, avg=18606.40, stdev=25627.22
00:27:22.967      clat (msec): min=91, max=4391, avg=2255.40, stdev=1189.51
00:27:22.967       lat (msec): min=116, max=4415, avg=2274.01, stdev=1194.58
00:27:22.967      clat percentiles (msec):
00:27:22.967       |  1.00th=[  167],  5.00th=[  531], 10.00th=[  969], 20.00th=[ 1133],
00:27:22.967       | 30.00th=[ 1284], 40.00th=[ 1687], 50.00th=[ 2140], 60.00th=[ 2534],
00:27:22.967       | 70.00th=[ 2836], 80.00th=[ 3507], 90.00th=[ 4111], 95.00th=[ 4245],
00:27:22.967       | 99.00th=[ 4329], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396],
00:27:22.967       | 99.99th=[ 4396]
00:27:22.967     bw (  KiB/s): min=14307, max=116736, per=1.62%, avg=46750.78, stdev=24318.62, samples=18
00:27:22.967     iops        : min=   13, max=  114, avg=45.50, stdev=23.83, samples=18
00:27:22.967    lat (msec)   : 100=0.19%, 250=1.67%, 500=2.42%, 750=3.16%, 1000=4.46%
00:27:22.967    lat (msec)   : 2000=36.25%, >=2000=51.86%
00:27:22.967    cpu          : usr=0.03%, sys=1.57%, ctx=1377, majf=0, minf=32769
00:27:22.967    IO depths    : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:27:22.967       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.967       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.967       issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.967       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418533: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=3, BW=4017KiB/s (4113kB/s)(48.0MiB/12237msec)
00:27:22.968      slat (usec): min=1860, max=2087.8k, avg=211034.31, stdev=572335.70
00:27:22.968      clat (msec): min=2105, max=12212, avg=7299.32, stdev=3412.53
00:27:22.968       lat (msec): min=3799, max=12235, avg=7510.36, stdev=3397.72
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[ 2106],  5.00th=[ 3809], 10.00th=[ 3842], 20.00th=[ 4044],
00:27:22.968       | 30.00th=[ 4077], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8490],
00:27:22.968       | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147],
00:27:22.968       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.968       | 99.99th=[12147]
00:27:22.968    lat (msec)   : >=2000=100.00%
00:27:22.968    cpu          : usr=0.00%, sys=0.35%, ctx=122, majf=0, minf=12289
00:27:22.968    IO depths    : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:27:22.968       issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418534: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=3, BW=3423KiB/s (3505kB/s)(41.0MiB/12267msec)
00:27:22.968      slat (usec): min=1089, max=2095.8k, avg=247294.60, stdev=639447.77
00:27:22.968      clat (msec): min=2127, max=12262, avg=9455.06, stdev=3320.01
00:27:22.968       lat (msec): min=4164, max=12266, avg=9702.36, stdev=3132.89
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[ 2123],  5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342],
00:27:22.968       | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[12013], 60.00th=[12147],
00:27:22.968       | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:27:22.968       | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:27:22.968       | 99.99th=[12281]
00:27:22.968    lat (msec)   : >=2000=100.00%
00:27:22.968    cpu          : usr=0.00%, sys=0.38%, ctx=82, majf=0, minf=10497
00:27:22.968    IO depths    : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:27:22.968       issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418535: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=28, BW=28.1MiB/s (29.5MB/s)(343MiB/12201msec)
00:27:22.968      slat (usec): min=446, max=1944.3k, avg=29424.60, stdev=132931.15
00:27:22.968      clat (msec): min=1751, max=8559, avg=4206.63, stdev=1645.48
00:27:22.968       lat (msec): min=1768, max=8566, avg=4236.06, stdev=1654.15
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[ 1770],  5.00th=[ 2005], 10.00th=[ 2165], 20.00th=[ 2333],
00:27:22.968       | 30.00th=[ 2735], 40.00th=[ 4245], 50.00th=[ 4396], 60.00th=[ 4530],
00:27:22.968       | 70.00th=[ 4799], 80.00th=[ 5134], 90.00th=[ 6141], 95.00th=[ 8020],
00:27:22.968       | 99.00th=[ 8356], 99.50th=[ 8423], 99.90th=[ 8557], 99.95th=[ 8557],
00:27:22.968       | 99.99th=[ 8557]
00:27:22.968     bw (  KiB/s): min= 1575, max=106496, per=1.18%, avg=33979.23, stdev=26978.95, samples=13
00:27:22.968     iops        : min=    1, max=  104, avg=32.92, stdev=26.47, samples=13
00:27:22.968    lat (msec)   : 2000=4.96%, >=2000=95.04%
00:27:22.968    cpu          : usr=0.01%, sys=0.82%, ctx=1091, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.6%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.968       issued rwts: total=343,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418536: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=71, BW=71.1MiB/s (74.6MB/s)(717MiB/10082msec)
00:27:22.968      slat (usec): min=49, max=204700, avg=14045.15, stdev=20507.29
00:27:22.968      clat (msec): min=7, max=3145, avg=1604.59, stdev=852.03
00:27:22.968       lat (msec): min=211, max=3158, avg=1618.64, stdev=855.15
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[  300],  5.00th=[  600], 10.00th=[  651], 20.00th=[  726],
00:27:22.968       | 30.00th=[  827], 40.00th=[ 1070], 50.00th=[ 1485], 60.00th=[ 2005],
00:27:22.968       | 70.00th=[ 2232], 80.00th=[ 2433], 90.00th=[ 2903], 95.00th=[ 3037],
00:27:22.968       | 99.00th=[ 3104], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138],
00:27:22.968       | 99.99th=[ 3138]
00:27:22.968     bw (  KiB/s): min=28672, max=227328, per=2.69%, avg=77797.47, stdev=63861.10, samples=15
00:27:22.968     iops        : min=   28, max=  222, avg=75.87, stdev=62.33, samples=15
00:27:22.968    lat (msec)   : 10=0.14%, 250=0.42%, 500=1.81%, 750=19.53%, 1000=14.92%
00:27:22.968    lat (msec)   : 2000=23.29%, >=2000=39.89%
00:27:22.968    cpu          : usr=0.02%, sys=1.25%, ctx=1771, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.968       issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418537: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=80, BW=80.2MiB/s (84.1MB/s)(807MiB/10064msec)
00:27:22.968      slat (usec): min=42, max=2089.0k, avg=12386.62, stdev=89391.99
00:27:22.968      clat (msec): min=61, max=4672, avg=1072.21, stdev=645.30
00:27:22.968       lat (msec): min=66, max=4678, avg=1084.60, stdev=657.23
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[  138],  5.00th=[  317], 10.00th=[  625], 20.00th=[  944],
00:27:22.968       | 30.00th=[ 1003], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1053],
00:27:22.968       | 70.00th=[ 1070], 80.00th=[ 1099], 90.00th=[ 1133], 95.00th=[ 1167],
00:27:22.968       | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665],
00:27:22.968       | 99.99th=[ 4665]
00:27:22.968     bw (  KiB/s): min=108544, max=129024, per=4.27%, avg=123289.60, stdev=5856.63, samples=10
00:27:22.968     iops        : min=  106, max=  126, avg=120.40, stdev= 5.72, samples=10
00:27:22.968    lat (msec)   : 100=0.25%, 250=3.84%, 500=3.72%, 750=5.33%, 1000=17.97%
00:27:22.968    lat (msec)   : 2000=65.06%, >=2000=3.84%
00:27:22.968    cpu          : usr=0.06%, sys=2.05%, ctx=717, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.968       issued rwts: total=807,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418538: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=67, BW=67.0MiB/s (70.3MB/s)(675MiB/10072msec)
00:27:22.968      slat (usec): min=42, max=110250, avg=14836.79, stdev=20100.04
00:27:22.968      clat (msec): min=52, max=3376, avg=1765.72, stdev=790.41
00:27:22.968       lat (msec): min=75, max=3400, avg=1780.56, stdev=794.18
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[  126],  5.00th=[  313], 10.00th=[  592], 20.00th=[ 1083],
00:27:22.968       | 30.00th=[ 1519], 40.00th=[ 1703], 50.00th=[ 1787], 60.00th=[ 1854],
00:27:22.968       | 70.00th=[ 1972], 80.00th=[ 2534], 90.00th=[ 2937], 95.00th=[ 3037],
00:27:22.968       | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3373], 99.95th=[ 3373],
00:27:22.968       | 99.99th=[ 3373]
00:27:22.968     bw (  KiB/s): min=32768, max=131072, per=2.28%, avg=65871.35, stdev=29390.89, samples=17
00:27:22.968     iops        : min=   32, max=  128, avg=64.18, stdev=28.67, samples=17
00:27:22.968    lat (msec)   : 100=0.74%, 250=2.96%, 500=4.74%, 750=5.33%, 1000=3.85%
00:27:22.968    lat (msec)   : 2000=53.33%, >=2000=29.04%
00:27:22.968    cpu          : usr=0.07%, sys=1.85%, ctx=1571, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.968       issued rwts: total=675,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418539: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(295MiB/10109msec)
00:27:22.968      slat (usec): min=490, max=1935.0k, avg=34181.63, stdev=154092.70
00:27:22.968      clat (msec): min=23, max=8513, avg=3425.93, stdev=1609.94
00:27:22.968       lat (msec): min=1123, max=8518, avg=3460.11, stdev=1618.79
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[ 1133],  5.00th=[ 1351], 10.00th=[ 1485], 20.00th=[ 1871],
00:27:22.968       | 30.00th=[ 2072], 40.00th=[ 2601], 50.00th=[ 2869], 60.00th=[ 4396],
00:27:22.968       | 70.00th=[ 4933], 80.00th=[ 5067], 90.00th=[ 5134], 95.00th=[ 5873],
00:27:22.968       | 99.00th=[ 7080], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490],
00:27:22.968       | 99.99th=[ 8490]
00:27:22.968     bw (  KiB/s): min=12263, max=81920, per=1.31%, avg=37980.78, stdev=21056.22, samples=9
00:27:22.968     iops        : min=   11, max=   80, avg=36.78, stdev=20.69, samples=9
00:27:22.968    lat (msec)   : 50=0.34%, 2000=28.14%, >=2000=71.53%
00:27:22.968    cpu          : usr=0.00%, sys=1.01%, ctx=925, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:27:22.968       issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.968       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.968  job2: (groupid=0, jobs=1): err= 0: pid=3418540: Sat Dec 14 13:53:20 2024
00:27:22.968    read: IOPS=35, BW=35.9MiB/s (37.6MB/s)(360MiB/10035msec)
00:27:22.968      slat (usec): min=74, max=1664.6k, avg=27777.16, stdev=114788.64
00:27:22.968      clat (msec): min=33, max=8355, avg=2564.21, stdev=1710.07
00:27:22.968       lat (msec): min=35, max=8416, avg=2591.99, stdev=1726.05
00:27:22.968      clat percentiles (msec):
00:27:22.968       |  1.00th=[   39],  5.00th=[   68], 10.00th=[  121], 20.00th=[  542],
00:27:22.968       | 30.00th=[ 1636], 40.00th=[ 2265], 50.00th=[ 2366], 60.00th=[ 2702],
00:27:22.968       | 70.00th=[ 4111], 80.00th=[ 4597], 90.00th=[ 4732], 95.00th=[ 4799],
00:27:22.968       | 99.00th=[ 4799], 99.50th=[ 6678], 99.90th=[ 8356], 99.95th=[ 8356],
00:27:22.968       | 99.99th=[ 8356]
00:27:22.968     bw (  KiB/s): min= 8175, max=51200, per=1.15%, avg=33165.60, stdev=14782.84, samples=10
00:27:22.968     iops        : min=    7, max=   50, avg=32.20, stdev=14.51, samples=10
00:27:22.968    lat (msec)   : 50=2.78%, 100=4.44%, 250=7.22%, 500=4.72%, 750=3.06%
00:27:22.968    lat (msec)   : 1000=2.22%, 2000=11.39%, >=2000=64.17%
00:27:22.968    cpu          : usr=0.00%, sys=0.90%, ctx=1063, majf=0, minf=32769
00:27:22.968    IO depths    : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5%
00:27:22.968       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.968       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.969       issued rwts: total=360,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job2: (groupid=0, jobs=1): err= 0: pid=3418541: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=3, BW=3869KiB/s (3962kB/s)(46.0MiB/12174msec)
00:27:22.969      slat (usec): min=616, max=2090.2k, avg=218844.50, stdev=614690.98
00:27:22.969      clat (msec): min=2105, max=12170, avg=9713.21, stdev=2810.25
00:27:22.969       lat (msec): min=4164, max=12172, avg=9932.06, stdev=2587.86
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[ 2106],  5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 8490],
00:27:22.969       | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12147],
00:27:22.969       | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:27:22.969       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.969       | 99.99th=[12147]
00:27:22.969    lat (msec)   : >=2000=100.00%
00:27:22.969    cpu          : usr=0.02%, sys=0.32%, ctx=58, majf=0, minf=11777
00:27:22.969    IO depths    : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:27:22.969       issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job2: (groupid=0, jobs=1): err= 0: pid=3418542: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(357MiB/12230msec)
00:27:22.969      slat (usec): min=129, max=2047.5k, avg=28348.28, stdev=153299.55
00:27:22.969      clat (msec): min=774, max=9652, avg=4178.56, stdev=3018.78
00:27:22.969       lat (msec): min=780, max=9659, avg=4206.91, stdev=3027.27
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[  785],  5.00th=[  835], 10.00th=[  894], 20.00th=[ 1217],
00:27:22.969       | 30.00th=[ 1871], 40.00th=[ 2400], 50.00th=[ 3239], 60.00th=[ 3574],
00:27:22.969       | 70.00th=[ 6678], 80.00th=[ 7617], 90.00th=[ 8926], 95.00th=[ 9329],
00:27:22.969       | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:27:22.969       | 99.99th=[ 9597]
00:27:22.969     bw (  KiB/s): min= 1503, max=75776, per=1.16%, avg=33602.36, stdev=20148.03, samples=14
00:27:22.969     iops        : min=    1, max=   74, avg=32.64, stdev=19.87, samples=14
00:27:22.969    lat (msec)   : 1000=12.61%, 2000=19.89%, >=2000=67.51%
00:27:22.969    cpu          : usr=0.00%, sys=1.24%, ctx=917, majf=0, minf=32769
00:27:22.969    IO depths    : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.4%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.969       issued rwts: total=357,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job2: (groupid=0, jobs=1): err= 0: pid=3418543: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=60, BW=60.0MiB/s (63.0MB/s)(604MiB/10059msec)
00:27:22.969      slat (usec): min=41, max=122744, avg=16561.27, stdev=24844.54
00:27:22.969      clat (msec): min=52, max=4369, avg=1953.63, stdev=1209.71
00:27:22.969       lat (msec): min=78, max=4376, avg=1970.19, stdev=1215.55
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[  133],  5.00th=[  334], 10.00th=[  510], 20.00th=[  818],
00:27:22.969       | 30.00th=[ 1334], 40.00th=[ 1687], 50.00th=[ 1720], 60.00th=[ 1787],
00:27:22.969       | 70.00th=[ 2106], 80.00th=[ 3306], 90.00th=[ 4077], 95.00th=[ 4212],
00:27:22.969       | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396],
00:27:22.969       | 99.99th=[ 4396]
00:27:22.969     bw (  KiB/s): min=14336, max=173732, per=1.99%, avg=57434.29, stdev=43295.70, samples=17
00:27:22.969     iops        : min=   14, max=  169, avg=55.94, stdev=42.22, samples=17
00:27:22.969    lat (msec)   : 100=0.66%, 250=2.48%, 500=6.46%, 750=7.45%, 1000=10.60%
00:27:22.969    lat (msec)   : 2000=40.73%, >=2000=31.62%
00:27:22.969    cpu          : usr=0.07%, sys=1.66%, ctx=1478, majf=0, minf=32769
00:27:22.969    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.969       issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job2: (groupid=0, jobs=1): err= 0: pid=3418544: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=25, BW=25.7MiB/s (27.0MB/s)(315MiB/12235msec)
00:27:22.969      slat (usec): min=47, max=2029.8k, avg=32093.34, stdev=155615.36
00:27:22.969      clat (msec): min=1998, max=8751, avg=4572.25, stdev=2423.46
00:27:22.969       lat (msec): min=2001, max=8782, avg=4604.34, stdev=2422.69
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[ 2089],  5.00th=[ 2232], 10.00th=[ 2467], 20.00th=[ 2567],
00:27:22.969       | 30.00th=[ 2702], 40.00th=[ 2735], 50.00th=[ 2836], 60.00th=[ 4597],
00:27:22.969       | 70.00th=[ 6611], 80.00th=[ 7483], 90.00th=[ 8423], 95.00th=[ 8557],
00:27:22.969       | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8792], 99.95th=[ 8792],
00:27:22.969       | 99.99th=[ 8792]
00:27:22.969     bw (  KiB/s): min= 1503, max=79872, per=1.11%, avg=32035.42, stdev=23519.01, samples=12
00:27:22.969     iops        : min=    1, max=   78, avg=31.17, stdev=23.04, samples=12
00:27:22.969    lat (msec)   : 2000=0.32%, >=2000=99.68%
00:27:22.969    cpu          : usr=0.01%, sys=0.81%, ctx=1053, majf=0, minf=32769
00:27:22.969    IO depths    : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=80.0%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.969       issued rwts: total=315,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job3: (groupid=0, jobs=1): err= 0: pid=3418545: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=97, BW=97.0MiB/s (102MB/s)(976MiB/10059msec)
00:27:22.969      slat (usec): min=41, max=102439, avg=10247.42, stdev=17927.39
00:27:22.969      clat (msec): min=52, max=2951, avg=1243.36, stdev=657.04
00:27:22.969       lat (msec): min=69, max=2961, avg=1253.61, stdev=659.50
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[  186],  5.00th=[  558], 10.00th=[  609], 20.00th=[  785],
00:27:22.969       | 30.00th=[  827], 40.00th=[  835], 50.00th=[  911], 60.00th=[ 1217],
00:27:22.969       | 70.00th=[ 1586], 80.00th=[ 1921], 90.00th=[ 2198], 95.00th=[ 2567],
00:27:22.969       | 99.00th=[ 2836], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937],
00:27:22.969       | 99.99th=[ 2937]
00:27:22.969     bw (  KiB/s): min=30720, max=212992, per=3.34%, avg=96631.72, stdev=61457.86, samples=18
00:27:22.969     iops        : min=   30, max=  208, avg=94.28, stdev=60.01, samples=18
00:27:22.969    lat (msec)   : 100=0.41%, 250=1.13%, 500=2.05%, 750=14.96%, 1000=38.22%
00:27:22.969    lat (msec)   : 2000=26.43%, >=2000=16.80%
00:27:22.969    cpu          : usr=0.03%, sys=2.03%, ctx=1882, majf=0, minf=32769
00:27:22.969    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.969       issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job3: (groupid=0, jobs=1): err= 0: pid=3418546: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=81, BW=81.0MiB/s (85.0MB/s)(814MiB/10044msec)
00:27:22.969      slat (usec): min=54, max=1352.5k, avg=12285.81, stdev=51151.82
00:27:22.969      clat (msec): min=36, max=3901, avg=1190.99, stdev=544.10
00:27:22.969       lat (msec): min=49, max=3904, avg=1203.27, stdev=552.65
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[  124],  5.00th=[  584], 10.00th=[  911], 20.00th=[ 1003],
00:27:22.969       | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1062], 60.00th=[ 1083],
00:27:22.969       | 70.00th=[ 1133], 80.00th=[ 1200], 90.00th=[ 1720], 95.00th=[ 2299],
00:27:22.969       | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3910], 99.95th=[ 3910],
00:27:22.969       | 99.99th=[ 3910]
00:27:22.969     bw (  KiB/s): min=24526, max=139264, per=3.88%, avg=112123.83, stdev=32348.37, samples=12
00:27:22.969     iops        : min=   23, max=  136, avg=109.42, stdev=31.82, samples=12
00:27:22.969    lat (msec)   : 50=0.25%, 100=0.49%, 250=0.98%, 500=1.97%, 750=3.81%
00:27:22.969    lat (msec)   : 1000=13.76%, 2000=71.25%, >=2000=7.49%
00:27:22.969    cpu          : usr=0.06%, sys=1.86%, ctx=1106, majf=0, minf=32769
00:27:22.969    IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3%
00:27:22.969       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.969       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.969       issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.969       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.969  job3: (groupid=0, jobs=1): err= 0: pid=3418547: Sat Dec 14 13:53:20 2024
00:27:22.969    read: IOPS=60, BW=60.7MiB/s (63.6MB/s)(618MiB/10186msec)
00:27:22.969      slat (usec): min=40, max=1376.1k, avg=16288.53, stdev=59057.42
00:27:22.969      clat (msec): min=115, max=3296, avg=1690.43, stdev=1001.25
00:27:22.969       lat (msec): min=186, max=3314, avg=1706.72, stdev=1006.33
00:27:22.969      clat percentiles (msec):
00:27:22.969       |  1.00th=[  380],  5.00th=[  523], 10.00th=[  567], 20.00th=[  651],
00:27:22.969       | 30.00th=[  793], 40.00th=[ 1053], 50.00th=[ 1334], 60.00th=[ 2005],
00:27:22.970       | 70.00th=[ 2702], 80.00th=[ 2937], 90.00th=[ 3037], 95.00th=[ 3104],
00:27:22.970       | 99.00th=[ 3205], 99.50th=[ 3239], 99.90th=[ 3306], 99.95th=[ 3306],
00:27:22.970       | 99.99th=[ 3306]
00:27:22.970     bw (  KiB/s): min=18432, max=262144, per=2.32%, avg=66887.40, stdev=67860.95, samples=15
00:27:22.970     iops        : min=   18, max=  256, avg=65.13, stdev=66.34, samples=15
00:27:22.970    lat (msec)   : 250=0.81%, 500=1.94%, 750=25.73%, 1000=8.09%, 2000=23.30%
00:27:22.970    lat (msec)   : >=2000=40.13%
00:27:22.970    cpu          : usr=0.06%, sys=1.62%, ctx=1600, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.970       issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418548: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=59, BW=59.4MiB/s (62.3MB/s)(599MiB/10077msec)
00:27:22.970      slat (usec): min=33, max=1389.2k, avg=16709.54, stdev=60118.61
00:27:22.970      clat (msec): min=64, max=3210, avg=1640.95, stdev=737.30
00:27:22.970       lat (msec): min=80, max=3226, avg=1657.66, stdev=742.30
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[   92],  5.00th=[  409], 10.00th=[  793], 20.00th=[  986],
00:27:22.970       | 30.00th=[ 1150], 40.00th=[ 1385], 50.00th=[ 1620], 60.00th=[ 1804],
00:27:22.970       | 70.00th=[ 2039], 80.00th=[ 2333], 90.00th=[ 2769], 95.00th=[ 2836],
00:27:22.970       | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 3205],
00:27:22.970       | 99.99th=[ 3205]
00:27:22.970     bw (  KiB/s): min=34816, max=128766, per=2.33%, avg=67241.77, stdev=31974.52, samples=13
00:27:22.970     iops        : min=   34, max=  125, avg=65.54, stdev=31.15, samples=13
00:27:22.970    lat (msec)   : 100=1.34%, 250=2.00%, 500=3.51%, 750=3.01%, 1000=11.52%
00:27:22.970    lat (msec)   : 2000=47.08%, >=2000=31.55%
00:27:22.970    cpu          : usr=0.03%, sys=1.07%, ctx=1594, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.970       issued rwts: total=599,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418549: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=48, BW=48.9MiB/s (51.3MB/s)(493MiB/10072msec)
00:27:22.970      slat (usec): min=43, max=1387.4k, avg=20330.48, stdev=65748.09
00:27:22.970      clat (msec): min=46, max=3271, avg=2010.00, stdev=564.45
00:27:22.970       lat (msec): min=80, max=3399, avg=2030.33, stdev=567.20
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[  138],  5.00th=[  625], 10.00th=[ 1116], 20.00th=[ 1787],
00:27:22.970       | 30.00th=[ 2005], 40.00th=[ 2123], 50.00th=[ 2198], 60.00th=[ 2232],
00:27:22.970       | 70.00th=[ 2299], 80.00th=[ 2366], 90.00th=[ 2534], 95.00th=[ 2601],
00:27:22.970       | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 3272], 99.95th=[ 3272],
00:27:22.970       | 99.99th=[ 3272]
00:27:22.970     bw (  KiB/s): min=30720, max=90112, per=1.88%, avg=54175.23, stdev=20638.74, samples=13
00:27:22.970     iops        : min=   30, max=   88, avg=52.77, stdev=20.13, samples=13
00:27:22.970    lat (msec)   : 50=0.20%, 100=0.20%, 250=0.81%, 500=3.04%, 750=2.03%
00:27:22.970    lat (msec)   : 1000=1.01%, 2000=21.50%, >=2000=71.20%
00:27:22.970    cpu          : usr=0.03%, sys=1.08%, ctx=1560, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:22.970       issued rwts: total=493,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418550: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=57, BW=57.8MiB/s (60.6MB/s)(584MiB/10109msec)
00:27:22.970      slat (usec): min=42, max=129778, avg=17161.99, stdev=24650.19
00:27:22.970      clat (msec): min=82, max=4523, avg=1949.91, stdev=741.30
00:27:22.970       lat (msec): min=113, max=4552, avg=1967.08, stdev=740.61
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[  136],  5.00th=[  743], 10.00th=[ 1099], 20.00th=[ 1250],
00:27:22.970       | 30.00th=[ 1603], 40.00th=[ 1854], 50.00th=[ 2022], 60.00th=[ 2232],
00:27:22.970       | 70.00th=[ 2366], 80.00th=[ 2534], 90.00th=[ 2635], 95.00th=[ 2702],
00:27:22.970       | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4530], 99.95th=[ 4530],
00:27:22.970       | 99.99th=[ 4530]
00:27:22.970     bw (  KiB/s): min=10219, max=149504, per=2.04%, avg=58962.40, stdev=32394.85, samples=15
00:27:22.970     iops        : min=    9, max=  146, avg=57.40, stdev=31.71, samples=15
00:27:22.970    lat (msec)   : 100=0.17%, 250=1.71%, 500=1.71%, 750=1.54%, 1000=2.05%
00:27:22.970    lat (msec)   : 2000=41.27%, >=2000=51.54%
00:27:22.970    cpu          : usr=0.03%, sys=1.38%, ctx=1930, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.970       issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418551: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=53, BW=53.3MiB/s (55.9MB/s)(538MiB/10098msec)
00:27:22.970      slat (usec): min=36, max=1387.4k, avg=18610.51, stdev=63209.90
00:27:22.970      clat (msec): min=82, max=3427, avg=1886.34, stdev=526.42
00:27:22.970       lat (msec): min=113, max=3441, avg=1904.95, stdev=527.24
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[  226],  5.00th=[  634], 10.00th=[ 1183], 20.00th=[ 1687],
00:27:22.970       | 30.00th=[ 1838], 40.00th=[ 1905], 50.00th=[ 1989], 60.00th=[ 2039],
00:27:22.970       | 70.00th=[ 2106], 80.00th=[ 2198], 90.00th=[ 2333], 95.00th=[ 2400],
00:27:22.970       | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3440], 99.95th=[ 3440],
00:27:22.970       | 99.99th=[ 3440]
00:27:22.970     bw (  KiB/s): min=30720, max=104448, per=2.13%, avg=61583.92, stdev=24046.62, samples=13
00:27:22.970     iops        : min=   30, max=  102, avg=60.00, stdev=23.60, samples=13
00:27:22.970    lat (msec)   : 100=0.19%, 250=1.12%, 500=1.86%, 750=2.97%, 1000=3.35%
00:27:22.970    lat (msec)   : 2000=41.08%, >=2000=49.44%
00:27:22.970    cpu          : usr=0.03%, sys=1.15%, ctx=1625, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.970       issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418552: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=53, BW=53.7MiB/s (56.3MB/s)(541MiB/10081msec)
00:27:22.970      slat (usec): min=470, max=1202.5k, avg=18494.02, stdev=56321.15
00:27:22.970      clat (msec): min=72, max=2914, avg=2059.33, stdev=581.49
00:27:22.970       lat (msec): min=83, max=2924, avg=2077.82, stdev=580.16
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[  236],  5.00th=[  659], 10.00th=[ 1250], 20.00th=[ 1770],
00:27:22.970       | 30.00th=[ 2005], 40.00th=[ 2089], 50.00th=[ 2165], 60.00th=[ 2232],
00:27:22.970       | 70.00th=[ 2366], 80.00th=[ 2467], 90.00th=[ 2702], 95.00th=[ 2735],
00:27:22.970       | 99.00th=[ 2802], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903],
00:27:22.970       | 99.99th=[ 2903]
00:27:22.970     bw (  KiB/s): min=18432, max=106496, per=1.83%, avg=52983.25, stdev=21302.09, samples=16
00:27:22.970     iops        : min=   18, max=  104, avg=51.62, stdev=20.91, samples=16
00:27:22.970    lat (msec)   : 100=0.37%, 250=0.92%, 500=3.33%, 750=1.11%, 1000=2.96%
00:27:22.970    lat (msec)   : 2000=19.78%, >=2000=71.53%
00:27:22.970    cpu          : usr=0.05%, sys=1.45%, ctx=1689, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.970       issued rwts: total=541,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418553: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=50, BW=50.4MiB/s (52.9MB/s)(511MiB/10129msec)
00:27:22.970      slat (usec): min=42, max=1401.0k, avg=19577.93, stdev=65455.55
00:27:22.970      clat (msec): min=121, max=3005, avg=2072.08, stdev=710.46
00:27:22.970       lat (msec): min=150, max=3012, avg=2091.65, stdev=711.51
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[  188],  5.00th=[  558], 10.00th=[  835], 20.00th=[ 1569],
00:27:22.970       | 30.00th=[ 1871], 40.00th=[ 2056], 50.00th=[ 2198], 60.00th=[ 2400],
00:27:22.970       | 70.00th=[ 2534], 80.00th=[ 2702], 90.00th=[ 2869], 95.00th=[ 2937],
00:27:22.970       | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004],
00:27:22.970       | 99.99th=[ 3004]
00:27:22.970     bw (  KiB/s): min=18432, max=88064, per=1.81%, avg=52412.87, stdev=22584.62, samples=15
00:27:22.970     iops        : min=   18, max=   86, avg=51.07, stdev=22.02, samples=15
00:27:22.970    lat (msec)   : 250=1.17%, 500=2.94%, 750=4.11%, 1000=4.50%, 2000=23.29%
00:27:22.970    lat (msec)   : >=2000=63.99%
00:27:22.970    cpu          : usr=0.05%, sys=1.25%, ctx=1670, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.7%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:22.970       issued rwts: total=511,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.970       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.970  job3: (groupid=0, jobs=1): err= 0: pid=3418554: Sat Dec 14 13:53:20 2024
00:27:22.970    read: IOPS=54, BW=54.4MiB/s (57.0MB/s)(546MiB/10037msec)
00:27:22.970      slat (usec): min=55, max=128472, avg=18317.86, stdev=24546.60
00:27:22.970      clat (msec): min=32, max=3223, avg=2069.88, stdev=728.02
00:27:22.970       lat (msec): min=42, max=3266, avg=2088.20, stdev=729.16
00:27:22.970      clat percentiles (msec):
00:27:22.970       |  1.00th=[   79],  5.00th=[  388], 10.00th=[  785], 20.00th=[ 1787],
00:27:22.970       | 30.00th=[ 1938], 40.00th=[ 2056], 50.00th=[ 2198], 60.00th=[ 2299],
00:27:22.970       | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2903], 95.00th=[ 3071],
00:27:22.970       | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3239], 99.95th=[ 3239],
00:27:22.970       | 99.99th=[ 3239]
00:27:22.970     bw (  KiB/s): min=14307, max=90112, per=1.81%, avg=52279.13, stdev=19468.20, samples=15
00:27:22.970     iops        : min=   13, max=   88, avg=50.87, stdev=19.22, samples=15
00:27:22.970    lat (msec)   : 50=0.55%, 100=0.92%, 250=1.28%, 500=3.30%, 750=3.11%
00:27:22.970    lat (msec)   : 1000=3.11%, 2000=23.26%, >=2000=64.47%
00:27:22.970    cpu          : usr=0.02%, sys=1.26%, ctx=1896, majf=0, minf=32769
00:27:22.970    IO depths    : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5%
00:27:22.970       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.970       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.971       issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job3: (groupid=0, jobs=1): err= 0: pid=3418555: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=96, BW=96.5MiB/s (101MB/s)(971MiB/10067msec)
00:27:22.971      slat (usec): min=42, max=129914, avg=10310.36, stdev=21129.76
00:27:22.971      clat (msec): min=50, max=2914, avg=1249.41, stdev=1049.34
00:27:22.971       lat (msec): min=82, max=2930, avg=1259.72, stdev=1056.45
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[   91],  5.00th=[  239], 10.00th=[  414], 20.00th=[  414],
00:27:22.971       | 30.00th=[  418], 40.00th=[  447], 50.00th=[  468], 60.00th=[ 1070],
00:27:22.971       | 70.00th=[ 2433], 80.00th=[ 2668], 90.00th=[ 2769], 95.00th=[ 2802],
00:27:22.971       | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903],
00:27:22.971       | 99.99th=[ 2903]
00:27:22.971     bw (  KiB/s): min=32768, max=311296, per=3.52%, avg=101551.88, stdev=103947.97, samples=17
00:27:22.971     iops        : min=   32, max=  304, avg=99.12, stdev=101.55, samples=17
00:27:22.971    lat (msec)   : 100=1.65%, 250=4.43%, 500=46.96%, 750=4.94%, 1000=1.75%
00:27:22.971    lat (msec)   : 2000=5.87%, >=2000=34.40%
00:27:22.971    cpu          : usr=0.05%, sys=1.90%, ctx=1943, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.971       issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job3: (groupid=0, jobs=1): err= 0: pid=3418556: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=42, BW=42.5MiB/s (44.6MB/s)(427MiB/10037msec)
00:27:22.971      slat (usec): min=79, max=1392.9k, avg=23434.18, stdev=71129.65
00:27:22.971      clat (msec): min=27, max=3684, avg=2160.15, stdev=916.74
00:27:22.971       lat (msec): min=37, max=3789, avg=2183.58, stdev=922.19
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[   43],  5.00th=[  100], 10.00th=[  334], 20.00th=[ 1401],
00:27:22.971       | 30.00th=[ 2265], 40.00th=[ 2400], 50.00th=[ 2601], 60.00th=[ 2702],
00:27:22.971       | 70.00th=[ 2735], 80.00th=[ 2802], 90.00th=[ 2903], 95.00th=[ 2937],
00:27:22.971       | 99.00th=[ 3004], 99.50th=[ 3037], 99.90th=[ 3675], 99.95th=[ 3675],
00:27:22.971       | 99.99th=[ 3675]
00:27:22.971     bw (  KiB/s): min=18395, max=65536, per=1.46%, avg=42146.42, stdev=13719.99, samples=12
00:27:22.971     iops        : min=   17, max=   64, avg=41.00, stdev=13.62, samples=12
00:27:22.971    lat (msec)   : 50=1.64%, 100=3.51%, 250=3.04%, 500=3.28%, 750=3.75%
00:27:22.971    lat (msec)   : 1000=2.11%, 2000=7.49%, >=2000=75.18%
00:27:22.971    cpu          : usr=0.02%, sys=1.23%, ctx=1477, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.2%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:22.971       issued rwts: total=427,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job3: (groupid=0, jobs=1): err= 0: pid=3418557: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=60, BW=60.5MiB/s (63.4MB/s)(607MiB/10034msec)
00:27:22.971      slat (usec): min=61, max=136854, avg=16468.01, stdev=20614.31
00:27:22.971      clat (msec): min=33, max=2945, avg=1938.35, stdev=591.08
00:27:22.971       lat (msec): min=36, max=2960, avg=1954.82, stdev=591.30
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[  116],  5.00th=[  877], 10.00th=[ 1334], 20.00th=[ 1569],
00:27:22.971       | 30.00th=[ 1636], 40.00th=[ 1703], 50.00th=[ 1989], 60.00th=[ 2106],
00:27:22.971       | 70.00th=[ 2265], 80.00th=[ 2433], 90.00th=[ 2735], 95.00th=[ 2836],
00:27:22.971       | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937],
00:27:22.971       | 99.99th=[ 2937]
00:27:22.971     bw (  KiB/s): min=38912, max=108544, per=2.05%, avg=59114.94, stdev=19430.25, samples=16
00:27:22.971     iops        : min=   38, max=  106, avg=57.56, stdev=19.01, samples=16
00:27:22.971    lat (msec)   : 50=0.49%, 100=0.33%, 250=0.82%, 500=1.32%, 750=1.15%
00:27:22.971    lat (msec)   : 1000=1.65%, 2000=45.14%, >=2000=49.09%
00:27:22.971    cpu          : usr=0.08%, sys=1.59%, ctx=1818, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.971       issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job4: (groupid=0, jobs=1): err= 0: pid=3418560: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=16, BW=16.8MiB/s (17.6MB/s)(205MiB/12220msec)
00:27:22.971      slat (usec): min=46, max=2124.0k, avg=49340.27, stdev=255811.06
00:27:22.971      clat (msec): min=2104, max=8458, avg=5537.92, stdev=924.04
00:27:22.971       lat (msec): min=2755, max=8501, avg=5587.26, stdev=865.37
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[ 2735],  5.00th=[ 2802], 10.00th=[ 4329], 20.00th=[ 5470],
00:27:22.971       | 30.00th=[ 5604], 40.00th=[ 5671], 50.00th=[ 5738], 60.00th=[ 5873],
00:27:22.971       | 70.00th=[ 5940], 80.00th=[ 6074], 90.00th=[ 6208], 95.00th=[ 6275],
00:27:22.971       | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 8490], 99.95th=[ 8490],
00:27:22.971       | 99.99th=[ 8490]
00:27:22.971     bw (  KiB/s): min= 1519, max=86016, per=1.10%, avg=31842.00, stdev=40765.83, samples=5
00:27:22.971     iops        : min=    1, max=   84, avg=30.80, stdev=40.08, samples=5
00:27:22.971    lat (msec)   : >=2000=100.00%
00:27:22.971    cpu          : usr=0.01%, sys=0.73%, ctx=466, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.6%, >=64=69.3%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:27:22.971       issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job4: (groupid=0, jobs=1): err= 0: pid=3418561: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=26, BW=26.1MiB/s (27.3MB/s)(265MiB/10171msec)
00:27:22.971      slat (usec): min=42, max=2213.4k, avg=37745.82, stdev=230064.10
00:27:22.971      clat (msec): min=167, max=9106, avg=2853.85, stdev=3466.48
00:27:22.971       lat (msec): min=225, max=9108, avg=2891.60, stdev=3487.75
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[  228],  5.00th=[  262], 10.00th=[  380], 20.00th=[  600],
00:27:22.971       | 30.00th=[  802], 40.00th=[  869], 50.00th=[  927], 60.00th=[ 1020],
00:27:22.971       | 70.00th=[ 1234], 80.00th=[ 8154], 90.00th=[ 8926], 95.00th=[ 9060],
00:27:22.971       | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:27:22.971       | 99.99th=[ 9060]
00:27:22.971     bw (  KiB/s): min=129024, max=153600, per=4.89%, avg=141312.00, stdev=17377.86, samples=2
00:27:22.971     iops        : min=  126, max=  150, avg=138.00, stdev=16.97, samples=2
00:27:22.971    lat (msec)   : 250=1.89%, 500=13.58%, 750=14.34%, 1000=25.66%, 2000=17.36%
00:27:22.971    lat (msec)   : >=2000=27.17%
00:27:22.971    cpu          : usr=0.00%, sys=1.07%, ctx=575, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.1%, >=64=76.2%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:27:22.971       issued rwts: total=265,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job4: (groupid=0, jobs=1): err= 0: pid=3418562: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=84, BW=84.1MiB/s (88.2MB/s)(1031MiB/12255msec)
00:27:22.971      slat (usec): min=35, max=2105.0k, avg=9841.39, stdev=91633.24
00:27:22.971      clat (msec): min=272, max=6693, avg=1341.39, stdev=1975.78
00:27:22.971       lat (msec): min=274, max=6693, avg=1351.23, stdev=1981.55
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[  275],  5.00th=[  275], 10.00th=[  279], 20.00th=[  279],
00:27:22.971       | 30.00th=[  292], 40.00th=[  405], 50.00th=[  542], 60.00th=[  844],
00:27:22.971       | 70.00th=[  877], 80.00th=[ 1036], 90.00th=[ 6477], 95.00th=[ 6611],
00:27:22.971       | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678],
00:27:22.971       | 99.99th=[ 6678]
00:27:22.971     bw (  KiB/s): min= 1438, max=468992, per=5.82%, avg=168214.18, stdev=153032.38, samples=11
00:27:22.971     iops        : min=    1, max=  458, avg=164.18, stdev=149.47, samples=11
00:27:22.971    lat (msec)   : 500=49.18%, 750=3.01%, 1000=23.57%, 2000=10.86%, >=2000=13.39%
00:27:22.971    cpu          : usr=0.05%, sys=1.34%, ctx=1263, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.971       issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job4: (groupid=0, jobs=1): err= 0: pid=3418563: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=27, BW=27.4MiB/s (28.7MB/s)(337MiB/12305msec)
00:27:22.971      slat (usec): min=50, max=2117.8k, avg=30265.75, stdev=174796.38
00:27:22.971      clat (msec): min=1943, max=8455, avg=4061.23, stdev=1394.91
00:27:22.971       lat (msec): min=1947, max=8485, avg=4091.50, stdev=1411.52
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[ 1955],  5.00th=[ 1955], 10.00th=[ 2005], 20.00th=[ 2702],
00:27:22.971       | 30.00th=[ 2869], 40.00th=[ 3775], 50.00th=[ 4044], 60.00th=[ 4279],
00:27:22.971       | 70.00th=[ 5336], 80.00th=[ 5470], 90.00th=[ 5805], 95.00th=[ 5940],
00:27:22.971       | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 8423], 99.95th=[ 8423],
00:27:22.971       | 99.99th=[ 8423]
00:27:22.971     bw (  KiB/s): min= 2007, max=145408, per=2.13%, avg=61421.86, stdev=52931.45, samples=7
00:27:22.971     iops        : min=    1, max=  142, avg=59.71, stdev=51.93, samples=7
00:27:22.971    lat (msec)   : 2000=7.42%, >=2000=92.58%
00:27:22.971    cpu          : usr=0.01%, sys=0.88%, ctx=742, majf=0, minf=32769
00:27:22.971    IO depths    : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3%
00:27:22.971       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.971       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.971       issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.971       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.971  job4: (groupid=0, jobs=1): err= 0: pid=3418564: Sat Dec 14 13:53:20 2024
00:27:22.971    read: IOPS=5, BW=5804KiB/s (5943kB/s)(69.0MiB/12174msec)
00:27:22.971      slat (usec): min=463, max=2095.8k, avg=145589.83, stdev=476077.33
00:27:22.971      clat (msec): min=2127, max=12147, avg=9985.19, stdev=2492.69
00:27:22.971       lat (msec): min=4178, max=12173, avg=10130.78, stdev=2313.95
00:27:22.971      clat percentiles (msec):
00:27:22.971       |  1.00th=[ 2123],  5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 8423],
00:27:22.971       | 30.00th=[10671], 40.00th=[10805], 50.00th=[11073], 60.00th=[11208],
00:27:22.971       | 70.00th=[11476], 80.00th=[11745], 90.00th=[11879], 95.00th=[12013],
00:27:22.971       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.971       | 99.99th=[12147]
00:27:22.971    lat (msec)   : >=2000=100.00%
00:27:22.972    cpu          : usr=0.00%, sys=0.40%, ctx=294, majf=0, minf=17665
00:27:22.972    IO depths    : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:27:22.972       issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418565: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=16, BW=16.0MiB/s (16.8MB/s)(197MiB/12310msec)
00:27:22.972      slat (usec): min=630, max=2217.3k, avg=51691.73, stdev=272100.17
00:27:22.972      clat (msec): min=2125, max=11001, avg=5921.89, stdev=2440.70
00:27:22.972       lat (msec): min=3356, max=11015, avg=5973.58, stdev=2457.88
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[ 3373],  5.00th=[ 3507], 10.00th=[ 3608], 20.00th=[ 3742],
00:27:22.972       | 30.00th=[ 3876], 40.00th=[ 4010], 50.00th=[ 4144], 60.00th=[ 7617],
00:27:22.972       | 70.00th=[ 8087], 80.00th=[ 8423], 90.00th=[ 8490], 95.00th=[10805],
00:27:22.972       | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:27:22.972       | 99.99th=[10939]
00:27:22.972     bw (  KiB/s): min= 1984, max=98304, per=1.65%, avg=47765.33, stdev=48335.91, samples=3
00:27:22.972     iops        : min=    1, max=   96, avg=46.33, stdev=47.65, samples=3
00:27:22.972    lat (msec)   : >=2000=100.00%
00:27:22.972    cpu          : usr=0.02%, sys=0.93%, ctx=548, majf=0, minf=32331
00:27:22.972    IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.1%, 32=16.2%, >=64=68.0%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4%
00:27:22.972       issued rwts: total=197,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418566: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=20, BW=20.4MiB/s (21.4MB/s)(208MiB/10211msec)
00:27:22.972      slat (usec): min=67, max=2119.0k, avg=48757.73, stdev=270013.55
00:27:22.972      clat (msec): min=67, max=9600, avg=5950.20, stdev=3649.75
00:27:22.972       lat (msec): min=859, max=9601, avg=5998.96, stdev=3632.70
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[  844],  5.00th=[  885], 10.00th=[  902], 20.00th=[  944],
00:27:22.972       | 30.00th=[ 2022], 40.00th=[ 5403], 50.00th=[ 7684], 60.00th=[ 8926],
00:27:22.972       | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[ 9597],
00:27:22.972       | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:27:22.972       | 99.99th=[ 9597]
00:27:22.972     bw (  KiB/s): min= 2048, max=55296, per=0.71%, avg=20478.38, stdev=20334.50, samples=8
00:27:22.972     iops        : min=    2, max=   54, avg=19.88, stdev=19.96, samples=8
00:27:22.972    lat (msec)   : 100=0.48%, 1000=21.15%, 2000=7.21%, >=2000=71.15%
00:27:22.972    cpu          : usr=0.00%, sys=1.41%, ctx=379, majf=0, minf=32769
00:27:22.972    IO depths    : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2%
00:27:22.972       issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418567: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=5, BW=5363KiB/s (5492kB/s)(64.0MiB/12220msec)
00:27:22.972      slat (usec): min=415, max=2091.7k, avg=157693.55, stdev=497735.75
00:27:22.972      clat (msec): min=2127, max=12213, avg=7738.77, stdev=2746.09
00:27:22.972       lat (msec): min=4169, max=12219, avg=7896.47, stdev=2708.26
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[ 2123],  5.00th=[ 4245], 10.00th=[ 5940], 20.00th=[ 6007],
00:27:22.972       | 30.00th=[ 6141], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[ 6409],
00:27:22.972       | 70.00th=[ 8490], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147],
00:27:22.972       | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:27:22.972       | 99.99th=[12147]
00:27:22.972    lat (msec)   : >=2000=100.00%
00:27:22.972    cpu          : usr=0.02%, sys=0.36%, ctx=162, majf=0, minf=16385
00:27:22.972    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:27:22.972       issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418568: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(146MiB/12282msec)
00:27:22.972      slat (usec): min=432, max=2057.1k, avg=69717.94, stdev=306844.37
00:27:22.972      clat (msec): min=2102, max=11950, avg=7669.02, stdev=3421.66
00:27:22.972       lat (msec): min=3359, max=11953, avg=7738.73, stdev=3408.99
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[ 3373],  5.00th=[ 3473], 10.00th=[ 3574], 20.00th=[ 3775],
00:27:22.972       | 30.00th=[ 3977], 40.00th=[ 4279], 50.00th=[ 8658], 60.00th=[10537],
00:27:22.972       | 70.00th=[10805], 80.00th=[11208], 90.00th=[11610], 95.00th=[11745],
00:27:22.972       | 99.00th=[11879], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:27:22.972       | 99.99th=[12013]
00:27:22.972     bw (  KiB/s): min= 2052, max=36864, per=0.67%, avg=19458.00, stdev=24615.80, samples=2
00:27:22.972     iops        : min=    2, max=   36, avg=19.00, stdev=24.04, samples=2
00:27:22.972    lat (msec)   : >=2000=100.00%
00:27:22.972    cpu          : usr=0.01%, sys=0.79%, ctx=399, majf=0, minf=32769
00:27:22.972    IO depths    : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.5%, 16=11.0%, 32=21.9%, >=64=56.8%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=95.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.0%
00:27:22.972       issued rwts: total=146,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418569: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(227MiB/12291msec)
00:27:22.972      slat (usec): min=47, max=2100.8k, avg=44774.22, stdev=255095.27
00:27:22.972      clat (msec): min=456, max=8455, avg=4132.69, stdev=2241.62
00:27:22.972       lat (msec): min=456, max=8499, avg=4177.46, stdev=2254.44
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[  460],  5.00th=[  527], 10.00th=[  617], 20.00th=[  969],
00:27:22.972       | 30.00th=[ 4178], 40.00th=[ 4933], 50.00th=[ 5067], 60.00th=[ 5134],
00:27:22.972       | 70.00th=[ 5269], 80.00th=[ 5336], 90.00th=[ 7013], 95.00th=[ 7013],
00:27:22.972       | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 8423], 99.95th=[ 8423],
00:27:22.972       | 99.99th=[ 8423]
00:27:22.972     bw (  KiB/s): min= 2043, max=198656, per=1.77%, avg=51199.75, stdev=98304.17, samples=4
00:27:22.972     iops        : min=    1, max=  194, avg=49.75, stdev=96.17, samples=4
00:27:22.972    lat (msec)   : 500=2.64%, 750=9.69%, 1000=9.25%, 2000=7.05%, >=2000=71.37%
00:27:22.972    cpu          : usr=0.00%, sys=0.85%, ctx=380, majf=0, minf=32769
00:27:22.972    IO depths    : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.1%, >=64=72.2%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:27:22.972       issued rwts: total=227,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418570: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=14, BW=14.9MiB/s (15.6MB/s)(183MiB/12308msec)
00:27:22.972      slat (usec): min=120, max=2176.7k, avg=55622.27, stdev=290035.64
00:27:22.972      clat (msec): min=1777, max=12049, avg=8158.50, stdev=3245.71
00:27:22.972       lat (msec): min=1779, max=12059, avg=8214.13, stdev=3227.11
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[ 1804],  5.00th=[ 3775], 10.00th=[ 3977], 20.00th=[ 4144],
00:27:22.972       | 30.00th=[ 5940], 40.00th=[ 8087], 50.00th=[ 8154], 60.00th=[10537],
00:27:22.972       | 70.00th=[10805], 80.00th=[11342], 90.00th=[11879], 95.00th=[12013],
00:27:22.972       | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:27:22.972       | 99.99th=[12013]
00:27:22.972     bw (  KiB/s): min= 1984, max=90112, per=0.79%, avg=22924.80, stdev=38032.17, samples=5
00:27:22.972     iops        : min=    1, max=   88, avg=22.20, stdev=37.27, samples=5
00:27:22.972    lat (msec)   : 2000=4.37%, >=2000=95.63%
00:27:22.972    cpu          : usr=0.00%, sys=1.01%, ctx=385, majf=0, minf=32769
00:27:22.972    IO depths    : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.7%, 32=17.5%, >=64=65.6%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=98.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.8%
00:27:22.972       issued rwts: total=183,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418571: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=46, BW=46.0MiB/s (48.3MB/s)(561MiB/12183msec)
00:27:22.972      slat (usec): min=34, max=4063.6k, avg=17919.07, stdev=184494.33
00:27:22.972      clat (msec): min=743, max=7218, avg=2387.49, stdev=2390.06
00:27:22.972       lat (msec): min=744, max=7220, avg=2405.41, stdev=2394.53
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[  743],  5.00th=[  768], 10.00th=[  785], 20.00th=[  827],
00:27:22.972       | 30.00th=[  835], 40.00th=[  885], 50.00th=[  936], 60.00th=[ 1569],
00:27:22.972       | 70.00th=[ 1620], 80.00th=[ 6409], 90.00th=[ 6812], 95.00th=[ 7013],
00:27:22.972       | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215],
00:27:22.972       | 99.99th=[ 7215]
00:27:22.972     bw (  KiB/s): min= 1600, max=157696, per=3.84%, avg=111040.25, stdev=52394.84, samples=8
00:27:22.972     iops        : min=    1, max=  154, avg=108.25, stdev=51.31, samples=8
00:27:22.972    lat (msec)   : 750=2.14%, 1000=48.31%, 2000=23.71%, >=2000=25.85%
00:27:22.972    cpu          : usr=0.04%, sys=1.12%, ctx=539, majf=0, minf=32769
00:27:22.972    IO depths    : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8%
00:27:22.972       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.972       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.972       issued rwts: total=561,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.972       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.972  job4: (groupid=0, jobs=1): err= 0: pid=3418572: Sat Dec 14 13:53:20 2024
00:27:22.972    read: IOPS=48, BW=48.4MiB/s (50.7MB/s)(588MiB/12150msec)
00:27:22.972      slat (usec): min=41, max=2084.0k, avg=17008.04, stdev=155936.33
00:27:22.972      clat (msec): min=315, max=8571, avg=1231.39, stdev=1671.75
00:27:22.972       lat (msec): min=321, max=8592, avg=1248.40, stdev=1698.47
00:27:22.972      clat percentiles (msec):
00:27:22.972       |  1.00th=[  334],  5.00th=[  380], 10.00th=[  414], 20.00th=[  418],
00:27:22.972       | 30.00th=[  418], 40.00th=[  422], 50.00th=[  430], 60.00th=[  523],
00:27:22.972       | 70.00th=[  709], 80.00th=[ 2232], 90.00th=[ 2400], 95.00th=[ 2937],
00:27:22.972       | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:27:22.972       | 99.99th=[ 8557]
00:27:22.972     bw (  KiB/s): min=268288, max=313344, per=10.15%, avg=293245.67, stdev=22917.69, samples=3
00:27:22.972     iops        : min=  262, max=  306, avg=286.33, stdev=22.37, samples=3
00:27:22.972    lat (msec)   : 500=58.16%, 750=13.10%, 1000=1.02%, >=2000=27.72%
00:27:22.973    cpu          : usr=0.04%, sys=1.11%, ctx=651, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.973       issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418573: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=12, BW=12.2MiB/s (12.8MB/s)(124MiB/10174msec)
00:27:22.973      slat (usec): min=405, max=2068.7k, avg=81323.53, stdev=357432.66
00:27:22.973      clat (msec): min=89, max=10172, avg=8282.47, stdev=2298.56
00:27:22.973       lat (msec): min=2107, max=10173, avg=8363.80, stdev=2181.74
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[ 2106],  5.00th=[ 2232], 10.00th=[ 4329], 20.00th=[ 8557],
00:27:22.973       | 30.00th=[ 8792], 40.00th=[ 8926], 50.00th=[ 9060], 60.00th=[ 9194],
00:27:22.973       | 70.00th=[ 9329], 80.00th=[ 9597], 90.00th=[ 9866], 95.00th=[10000],
00:27:22.973       | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:27:22.973       | 99.99th=[10134]
00:27:22.973    lat (msec)   : 100=0.81%, >=2000=99.19%
00:27:22.973    cpu          : usr=0.01%, sys=1.02%, ctx=276, majf=0, minf=31745
00:27:22.973    IO depths    : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.5%, 16=12.9%, 32=25.8%, >=64=49.2%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:27:22.973       issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418574: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=75, BW=75.4MiB/s (79.0MB/s)(757MiB/10045msec)
00:27:22.973      slat (usec): min=35, max=2039.3k, avg=13204.91, stdev=96824.82
00:27:22.973      clat (msec): min=44, max=5190, avg=1397.21, stdev=1408.35
00:27:22.973       lat (msec): min=69, max=5206, avg=1410.42, stdev=1414.23
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  117],  5.00th=[  409], 10.00th=[  414], 20.00th=[  439],
00:27:22.973       | 30.00th=[  558], 40.00th=[  768], 50.00th=[  818], 60.00th=[  835],
00:27:22.973       | 70.00th=[  919], 80.00th=[ 2165], 90.00th=[ 4111], 95.00th=[ 4866],
00:27:22.973       | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201],
00:27:22.973       | 99.99th=[ 5201]
00:27:22.973     bw (  KiB/s): min= 6144, max=307200, per=3.80%, avg=109765.00, stdev=91411.15, samples=11
00:27:22.973     iops        : min=    6, max=  300, avg=106.91, stdev=89.30, samples=11
00:27:22.973    lat (msec)   : 50=0.13%, 100=0.40%, 250=2.77%, 500=24.44%, 750=10.83%
00:27:22.973    lat (msec)   : 1000=33.03%, 2000=5.94%, >=2000=22.46%
00:27:22.973    cpu          : usr=0.06%, sys=1.57%, ctx=988, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.973       issued rwts: total=757,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418575: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=57, BW=58.0MiB/s (60.8MB/s)(585MiB/10090msec)
00:27:22.973      slat (usec): min=42, max=1952.5k, avg=17123.88, stdev=82515.61
00:27:22.973      clat (msec): min=68, max=3983, avg=1948.78, stdev=1061.00
00:27:22.973       lat (msec): min=99, max=3987, avg=1965.90, stdev=1062.94
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  144],  5.00th=[  422], 10.00th=[  684], 20.00th=[ 1401],
00:27:22.973       | 30.00th=[ 1519], 40.00th=[ 1586], 50.00th=[ 1670], 60.00th=[ 1787],
00:27:22.973       | 70.00th=[ 1871], 80.00th=[ 3507], 90.00th=[ 3876], 95.00th=[ 3943],
00:27:22.973       | 99.00th=[ 3977], 99.50th=[ 3977], 99.90th=[ 3977], 99.95th=[ 3977],
00:27:22.973       | 99.99th=[ 3977]
00:27:22.973     bw (  KiB/s): min= 2048, max=126976, per=2.09%, avg=60387.57, stdev=37726.15, samples=14
00:27:22.973     iops        : min=    2, max=  124, avg=58.71, stdev=36.92, samples=14
00:27:22.973    lat (msec)   : 100=0.34%, 250=1.37%, 500=4.44%, 750=5.30%, 1000=3.93%
00:27:22.973    lat (msec)   : 2000=59.32%, >=2000=25.30%
00:27:22.973    cpu          : usr=0.05%, sys=1.02%, ctx=1473, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:22.973       issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418576: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=41, BW=41.8MiB/s (43.9MB/s)(422MiB/10090msec)
00:27:22.973      slat (usec): min=44, max=2122.1k, avg=23694.14, stdev=120087.92
00:27:22.973      clat (msec): min=88, max=5667, avg=2768.21, stdev=1690.56
00:27:22.973       lat (msec): min=135, max=6344, avg=2791.91, stdev=1696.58
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  178],  5.00th=[  368], 10.00th=[  860], 20.00th=[ 1519],
00:27:22.973       | 30.00th=[ 1737], 40.00th=[ 1821], 50.00th=[ 1989], 60.00th=[ 2366],
00:27:22.973       | 70.00th=[ 3809], 80.00th=[ 5403], 90.00th=[ 5470], 95.00th=[ 5604],
00:27:22.973       | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671],
00:27:22.973       | 99.99th=[ 5671]
00:27:22.973     bw (  KiB/s): min= 8192, max=112415, per=1.74%, avg=50357.42, stdev=26827.49, samples=12
00:27:22.973     iops        : min=    8, max=  109, avg=49.08, stdev=26.04, samples=12
00:27:22.973    lat (msec)   : 100=0.24%, 250=1.66%, 500=4.27%, 750=1.90%, 1000=3.55%
00:27:22.973    lat (msec)   : 2000=38.63%, >=2000=49.76%
00:27:22.973    cpu          : usr=0.01%, sys=1.14%, ctx=1260, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:22.973       issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418577: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(307MiB/10124msec)
00:27:22.973      slat (usec): min=67, max=2052.6k, avg=32621.52, stdev=166131.04
00:27:22.973      clat (msec): min=107, max=6885, avg=3595.36, stdev=2526.54
00:27:22.973       lat (msec): min=175, max=6913, avg=3627.98, stdev=2530.44
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  253],  5.00th=[  430], 10.00th=[  651], 20.00th=[ 1070],
00:27:22.973       | 30.00th=[ 1469], 40.00th=[ 2123], 50.00th=[ 2366], 60.00th=[ 6074],
00:27:22.973       | 70.00th=[ 6409], 80.00th=[ 6544], 90.00th=[ 6678], 95.00th=[ 6745],
00:27:22.973       | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879],
00:27:22.973       | 99.99th=[ 6879]
00:27:22.973     bw (  KiB/s): min= 6144, max=81920, per=1.41%, avg=40729.67, stdev=26359.08, samples=9
00:27:22.973     iops        : min=    6, max=   80, avg=39.67, stdev=25.87, samples=9
00:27:22.973    lat (msec)   : 250=0.98%, 500=4.23%, 750=6.19%, 1000=5.21%, 2000=22.48%
00:27:22.973    lat (msec)   : >=2000=60.91%
00:27:22.973    cpu          : usr=0.00%, sys=0.90%, ctx=992, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:27:22.973       issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418578: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=129, BW=129MiB/s (135MB/s)(1305MiB/10112msec)
00:27:22.973      slat (usec): min=35, max=144132, avg=7680.56, stdev=16865.78
00:27:22.973      clat (msec): min=79, max=2321, avg=941.13, stdev=401.44
00:27:22.973       lat (msec): min=113, max=2325, avg=948.81, stdev=403.24
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  372],  5.00th=[  550], 10.00th=[  584], 20.00th=[  701],
00:27:22.973       | 30.00th=[  751], 40.00th=[  802], 50.00th=[  827], 60.00th=[  835],
00:27:22.973       | 70.00th=[  902], 80.00th=[ 1099], 90.00th=[ 1653], 95.00th=[ 1888],
00:27:22.973       | 99.00th=[ 2198], 99.50th=[ 2232], 99.90th=[ 2299], 99.95th=[ 2333],
00:27:22.973       | 99.99th=[ 2333]
00:27:22.973     bw (  KiB/s): min=43008, max=253952, per=4.39%, avg=126818.42, stdev=60017.84, samples=19
00:27:22.973     iops        : min=   42, max=  248, avg=123.74, stdev=58.55, samples=19
00:27:22.973    lat (msec)   : 100=0.08%, 250=0.38%, 500=1.15%, 750=27.59%, 1000=49.89%
00:27:22.973    lat (msec)   : 2000=17.85%, >=2000=3.07%
00:27:22.973    cpu          : usr=0.05%, sys=2.48%, ctx=1488, majf=0, minf=32769
00:27:22.973    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2%
00:27:22.973       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.973       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.973       issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.973       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.973  job5: (groupid=0, jobs=1): err= 0: pid=3418579: Sat Dec 14 13:53:20 2024
00:27:22.973    read: IOPS=37, BW=37.8MiB/s (39.6MB/s)(382MiB/10109msec)
00:27:22.973      slat (usec): min=39, max=2022.0k, avg=26248.19, stdev=117081.83
00:27:22.973      clat (msec): min=79, max=5974, avg=3038.39, stdev=1941.39
00:27:22.973       lat (msec): min=144, max=5977, avg=3064.63, stdev=1945.72
00:27:22.973      clat percentiles (msec):
00:27:22.973       |  1.00th=[  180],  5.00th=[  309], 10.00th=[  844], 20.00th=[ 1603],
00:27:22.973       | 30.00th=[ 1989], 40.00th=[ 2106], 50.00th=[ 2165], 60.00th=[ 2232],
00:27:22.973       | 70.00th=[ 5470], 80.00th=[ 5604], 90.00th=[ 5805], 95.00th=[ 5873],
00:27:22.973       | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007],
00:27:22.973       | 99.99th=[ 6007]
00:27:22.974     bw (  KiB/s): min= 4087, max=71823, per=1.44%, avg=41523.27, stdev=24766.64, samples=11
00:27:22.974     iops        : min=    3, max=   70, avg=40.36, stdev=24.32, samples=11
00:27:22.974    lat (msec)   : 100=0.26%, 250=3.40%, 500=2.88%, 750=3.40%, 1000=1.31%
00:27:22.974    lat (msec)   : 2000=20.16%, >=2000=68.59%
00:27:22.974    cpu          : usr=0.04%, sys=0.96%, ctx=1169, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.974       issued rwts: total=382,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418580: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=42, BW=42.7MiB/s (44.8MB/s)(429MiB/10046msec)
00:27:22.974      slat (usec): min=74, max=2044.5k, avg=23328.74, stdev=139099.84
00:27:22.974      clat (msec): min=35, max=7084, avg=2853.71, stdev=2585.16
00:27:22.974       lat (msec): min=64, max=7089, avg=2877.04, stdev=2593.04
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[   82],  5.00th=[  338], 10.00th=[  793], 20.00th=[  936],
00:27:22.974       | 30.00th=[  978], 40.00th=[ 1234], 50.00th=[ 1536], 60.00th=[ 1787],
00:27:22.974       | 70.00th=[ 4144], 80.00th=[ 6745], 90.00th=[ 6946], 95.00th=[ 7013],
00:27:22.974       | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080],
00:27:22.974       | 99.99th=[ 7080]
00:27:22.974     bw (  KiB/s): min=10240, max=141312, per=1.78%, avg=51360.42, stdev=33483.56, samples=12
00:27:22.974     iops        : min=   10, max=  138, avg=50.08, stdev=32.68, samples=12
00:27:22.974    lat (msec)   : 50=0.23%, 100=1.17%, 250=2.80%, 500=3.03%, 750=2.33%
00:27:22.974    lat (msec)   : 1000=24.24%, 2000=30.77%, >=2000=35.43%
00:27:22.974    cpu          : usr=0.02%, sys=1.58%, ctx=1044, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:22.974       issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418581: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=34, BW=34.0MiB/s (35.7MB/s)(342MiB/10053msec)
00:27:22.974      slat (usec): min=96, max=2051.5k, avg=29257.01, stdev=145995.38
00:27:22.974      clat (msec): min=45, max=6767, avg=3409.70, stdev=2254.97
00:27:22.974       lat (msec): min=61, max=6772, avg=3438.95, stdev=2259.68
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[   74],  5.00th=[  239], 10.00th=[  542], 20.00th=[ 1452],
00:27:22.974       | 30.00th=[ 1921], 40.00th=[ 2165], 50.00th=[ 2265], 60.00th=[ 4245],
00:27:22.974       | 70.00th=[ 4597], 80.00th=[ 6342], 90.00th=[ 6477], 95.00th=[ 6611],
00:27:22.974       | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:27:22.974       | 99.99th=[ 6745]
00:27:22.974     bw (  KiB/s): min= 4096, max=57229, per=1.17%, avg=33677.55, stdev=18742.06, samples=11
00:27:22.974     iops        : min=    4, max=   55, avg=32.55, stdev=18.13, samples=11
00:27:22.974    lat (msec)   : 50=0.29%, 100=1.46%, 250=3.51%, 500=4.09%, 750=2.34%
00:27:22.974    lat (msec)   : 1000=3.22%, 2000=18.42%, >=2000=66.67%
00:27:22.974    cpu          : usr=0.03%, sys=0.92%, ctx=1086, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.6%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:22.974       issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418582: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=26, BW=26.4MiB/s (27.7MB/s)(267MiB/10095msec)
00:27:22.974      slat (usec): min=496, max=3553.3k, avg=37503.65, stdev=225413.55
00:27:22.974      clat (msec): min=79, max=7147, avg=3255.35, stdev=2317.98
00:27:22.974       lat (msec): min=106, max=7816, avg=3292.85, stdev=2337.66
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[  114],  5.00th=[  313], 10.00th=[  600], 20.00th=[ 1167],
00:27:22.974       | 30.00th=[ 1720], 40.00th=[ 2567], 50.00th=[ 2735], 60.00th=[ 2769],
00:27:22.974       | 70.00th=[ 4144], 80.00th=[ 6409], 90.00th=[ 6812], 95.00th=[ 7013],
00:27:22.974       | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148],
00:27:22.974       | 99.99th=[ 7148]
00:27:22.974     bw (  KiB/s): min= 4096, max=67584, per=1.41%, avg=40682.57, stdev=25572.61, samples=7
00:27:22.974     iops        : min=    4, max=   66, avg=39.71, stdev=24.96, samples=7
00:27:22.974    lat (msec)   : 100=0.37%, 250=3.37%, 500=3.37%, 750=6.74%, 1000=3.37%
00:27:22.974    lat (msec)   : 2000=21.35%, >=2000=61.42%
00:27:22.974    cpu          : usr=0.01%, sys=1.03%, ctx=902, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.4%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:27:22.974       issued rwts: total=267,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418583: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=90, BW=90.4MiB/s (94.8MB/s)(910MiB/10066msec)
00:27:22.974      slat (usec): min=42, max=1315.7k, avg=10991.09, stdev=60686.56
00:27:22.974      clat (msec): min=58, max=5038, avg=1202.33, stdev=591.04
00:27:22.974       lat (msec): min=149, max=5040, avg=1213.32, stdev=593.24
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[  157],  5.00th=[  414], 10.00th=[  709], 20.00th=[  735],
00:27:22.974       | 30.00th=[  776], 40.00th=[  894], 50.00th=[ 1028], 60.00th=[ 1099],
00:27:22.974       | 70.00th=[ 1485], 80.00th=[ 1871], 90.00th=[ 2165], 95.00th=[ 2232],
00:27:22.974       | 99.00th=[ 2265], 99.50th=[ 2265], 99.90th=[ 5067], 99.95th=[ 5067],
00:27:22.974       | 99.99th=[ 5067]
00:27:22.974     bw (  KiB/s): min= 4096, max=178176, per=3.65%, avg=105321.29, stdev=48404.07, samples=14
00:27:22.974     iops        : min=    4, max=  174, avg=102.71, stdev=47.32, samples=14
00:27:22.974    lat (msec)   : 100=0.11%, 250=1.76%, 500=3.30%, 750=21.43%, 1000=19.34%
00:27:22.974    lat (msec)   : 2000=37.36%, >=2000=16.70%
00:27:22.974    cpu          : usr=0.06%, sys=1.82%, ctx=991, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:22.974       issued rwts: total=910,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418584: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(371MiB/10053msec)
00:27:22.974      slat (usec): min=66, max=2046.8k, avg=26997.37, stdev=119079.00
00:27:22.974      clat (msec): min=34, max=8039, avg=3161.65, stdev=2189.29
00:27:22.974       lat (msec): min=63, max=8072, avg=3188.65, stdev=2192.16
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[   79],  5.00th=[  288], 10.00th=[  709], 20.00th=[ 1452],
00:27:22.974       | 30.00th=[ 2005], 40.00th=[ 2056], 50.00th=[ 2165], 60.00th=[ 2299],
00:27:22.974       | 70.00th=[ 5470], 80.00th=[ 5873], 90.00th=[ 5940], 95.00th=[ 6141],
00:27:22.974       | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020],
00:27:22.974       | 99.99th=[ 8020]
00:27:22.974     bw (  KiB/s): min=16384, max=63488, per=1.44%, avg=41463.75, stdev=17146.09, samples=12
00:27:22.974     iops        : min=   16, max=   62, avg=40.33, stdev=16.92, samples=12
00:27:22.974    lat (msec)   : 50=0.27%, 100=1.08%, 250=2.70%, 500=3.23%, 750=3.50%
00:27:22.974    lat (msec)   : 1000=5.12%, 2000=14.02%, >=2000=70.08%
00:27:22.974    cpu          : usr=0.06%, sys=1.27%, ctx=1212, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:22.974       issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  job5: (groupid=0, jobs=1): err= 0: pid=3418585: Sat Dec 14 13:53:20 2024
00:27:22.974    read: IOPS=15, BW=15.1MiB/s (15.8MB/s)(154MiB/10221msec)
00:27:22.974      slat (usec): min=131, max=2069.1k, avg=65780.13, stdev=323671.49
00:27:22.974      clat (msec): min=89, max=9813, avg=7730.03, stdev=2372.04
00:27:22.974       lat (msec): min=1657, max=9873, avg=7795.81, stdev=2295.61
00:27:22.974      clat percentiles (msec):
00:27:22.974       |  1.00th=[ 1653],  5.00th=[ 2232], 10.00th=[ 3742], 20.00th=[ 5873],
00:27:22.974       | 30.00th=[ 7886], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[ 8926],
00:27:22.974       | 70.00th=[ 9194], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9731],
00:27:22.974       | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866],
00:27:22.974       | 99.99th=[ 9866]
00:27:22.974     bw (  KiB/s): min= 4087, max=16384, per=0.37%, avg=10647.80, stdev=5498.04, samples=5
00:27:22.974     iops        : min=    3, max=   16, avg=10.20, stdev= 5.67, samples=5
00:27:22.974    lat (msec)   : 100=0.65%, 2000=1.30%, >=2000=98.05%
00:27:22.974    cpu          : usr=0.00%, sys=1.08%, ctx=289, majf=0, minf=32769
00:27:22.974    IO depths    : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.4%, 32=20.8%, >=64=59.1%
00:27:22.974       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:22.974       complete  : 0=0.0%, 4=96.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.6%
00:27:22.974       issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:22.974       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:22.974  
00:27:22.974  Run status group 0 (all jobs):
00:27:22.974     READ: bw=2821MiB/s (2958MB/s), 2605KiB/s-199MiB/s (2668kB/s-209MB/s), io=33.9GiB (36.4GB), run=10034-12310msec
00:27:22.974  
00:27:22.974  Disk stats (read/write):
00:27:22.974    nvme0n1: ios=28697/0, merge=0/0, ticks=5686312/0, in_queue=5686312, util=98.40%
00:27:22.974    nvme1n1: ios=58564/0, merge=0/0, ticks=7148551/0, in_queue=7148551, util=98.43%
00:27:22.974    nvme2n1: ios=41118/0, merge=0/0, ticks=6770146/0, in_queue=6770146, util=98.70%
00:27:22.974    nvme3n1: ios=64672/0, merge=0/0, ticks=6258635/0, in_queue=6258635, util=98.49%
00:27:22.974    nvme4n1: ios=32608/0, merge=0/0, ticks=4654362/0, in_queue=4654362, util=99.14%
00:27:22.974    nvme5n1: ios=49670/0, merge=0/0, ticks=6674249/0, in_queue=6674249, util=98.83%
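The fio report above describes a read-only run: per-job completion-latency (clat) percentiles, sampled bandwidth, and the IO-depth histogram, with "issued rwts: total=N,0,0,0" confirming no writes were submitted and "depth=128" giving the per-job queue depth. The ~1 MiB ratio of average bandwidth to average IOPS (e.g. 111040 KiB/s vs 108.25 IOPS in job4) suggests a 1 MiB block size. A minimal sketch of a comparable invocation follows; the actual job file used by srq_overwhelm.sh is not shown in this log, so every flag below is illustrative rather than the test's real configuration:

    # Hedged sketch only -- the real job file is not visible in this log.
    # Read-only libaio workload matching the summary above (iodepth=128,
    # ~1 MiB blocks, roughly 10 s runs per the 10034-12310 msec run range).
    fio --name=overwhelm --filename=/dev/nvme0n1 \
        --rw=randread --direct=1 --ioengine=libaio \
        --bs=1M --iodepth=128 --runtime=10 --time_based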
00:27:22.974   13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
00:27:22.974    13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:27:22.974   13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:22.974   13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:27:22.974  NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:27:22.974   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:27:22.974   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:22.974   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:22.975   13:53:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:23.233  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.233   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:23.492   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.492   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:23.492   13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:27:24.426  NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:24.426   13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:27:25.361  NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:25.361   13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:27:26.293  NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:27:26.293   13:53:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:27:27.227  NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
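The six iterations traced above are the teardown loop of srq_overwhelm.sh (script lines 40-43): disconnect the host from each cnode, wait until its serial number disappears from lsblk, then delete the subsystem over RPC. Reconstructed as a sketch from the xtrace; the retry bound in the helper is an assumption, since this log only shows the path where the namespace is already gone:

    # Reconstructed from the xtrace above (srq_overwhelm.sh lines 40-43).
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK0000000000000${i}"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

    # waitforserial_disconnect (common/autotest_common.sh): poll lsblk until
    # the serial no longer appears. The 15-try cap is an assumption; the
    # trace only exercises the immediate-success path (return 0).
    waitforserial_disconnect() {
        local i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$1"; do
            ((++i > 15)) && return 1
            sleep 1
        done
        # Double-check with the flat listing before declaring success.
        ! lsblk -l -o NAME,SERIAL | grep -q -w "$1"
    }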
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:27.227   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync
00:27:27.485   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:27:27.485   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:27:27.485   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e
00:27:27.485   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:27.485   13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:27:27.485  rmmod nvme_rdma
00:27:27.485  rmmod nvme_fabrics
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3416907 ']'
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3416907
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3416907 ']'
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3416907
00:27:27.485    13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:27.485    13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3416907
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3416907'
00:27:27.485  killing process with pid 3416907
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3416907
00:27:27.485   13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3416907
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:27:30.017  
00:27:30.017  real	0m36.727s
00:27:30.017  user	2m7.220s
00:27:30.017  sys	0m17.512s
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:30.017  ************************************
00:27:30.017  END TEST nvmf_srq_overwhelm
00:27:30.017  ************************************
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:30.017  ************************************
00:27:30.017  START TEST nvmf_shutdown
00:27:30.017  ************************************
00:27:30.017   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:27:30.017  * Looking for test storage...
00:27:30.017  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:27:30.017    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:30.017     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:27:30.017     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:30.017    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:30.017    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:30.018  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.018  		--rc genhtml_branch_coverage=1
00:27:30.018  		--rc genhtml_function_coverage=1
00:27:30.018  		--rc genhtml_legend=1
00:27:30.018  		--rc geninfo_all_blocks=1
00:27:30.018  		--rc geninfo_unexecuted_blocks=1
00:27:30.018  		
00:27:30.018  		'
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:30.018  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.018  		--rc genhtml_branch_coverage=1
00:27:30.018  		--rc genhtml_function_coverage=1
00:27:30.018  		--rc genhtml_legend=1
00:27:30.018  		--rc geninfo_all_blocks=1
00:27:30.018  		--rc geninfo_unexecuted_blocks=1
00:27:30.018  		
00:27:30.018  		'
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:30.018  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.018  		--rc genhtml_branch_coverage=1
00:27:30.018  		--rc genhtml_function_coverage=1
00:27:30.018  		--rc genhtml_legend=1
00:27:30.018  		--rc geninfo_all_blocks=1
00:27:30.018  		--rc geninfo_unexecuted_blocks=1
00:27:30.018  		
00:27:30.018  		'
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:30.018  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.018  		--rc genhtml_branch_coverage=1
00:27:30.018  		--rc genhtml_function_coverage=1
00:27:30.018  		--rc genhtml_legend=1
00:27:30.018  		--rc geninfo_all_blocks=1
00:27:30.018  		--rc geninfo_unexecuted_blocks=1
00:27:30.018  		
00:27:30.018  		'
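The scripts/common.sh trace above is the coverage-setup check `lt 1.15 2`: cmp_versions splits both version strings on ".", "-", and ":" (the IFS=.-: lines) and compares them field by field until one side wins; because the installed lcov (1.15) is below 2, the pre-2.0 option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported above. Condensed into a sketch below; only the "<" branch is exercised in this log, so the handling of the other operators is assumed symmetric:

    # Condensed from the cmp_versions xtrace above.
    cmp_versions() {                      # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v d1 d2
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $2 == '>'* ]]; return; }
            ((d1 < d2)) && { [[ $2 == '<'* ]]; return; }
        done
        [[ $2 == *'='* ]]                 # equal: true only for <=, >=, ==
    }
    # In the trace: lt 1.15 2 -> cmp_versions 1.15 '<' 2 -> true.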
00:27:30.018   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:30.018    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:30.018     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:30.277    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:27:30.277     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:27:30.277     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:30.277     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:30.277     13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:30.277      13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.278      13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.278      13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.278      13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:27:30.278      13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:30.278  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
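The "line 33: [: : integer expression expected" message above is a numeric test against an empty expansion inside build_nvmf_app_args: `[ '' -eq 1 ]` is an error for the `[` builtin (exit status 2) rather than a clean false, though the script tolerates it and simply skips that branch. A defensive form that keeps the test quiet is sketched below; VAR is a placeholder, since the real variable name at nvmf/common.sh line 33 is not visible in this excerpt:

    # The trace shows:  '[' '' -eq 1 ']'  ->  "[: : integer expression expected"
    # Treating an empty/unset value as 0 avoids the message (VAR is a placeholder):
    [ "${VAR:-0}" -eq 1 ]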
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:30.278  ************************************
00:27:30.278  START TEST nvmf_shutdown_tc1
00:27:30.278  ************************************
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:30.278    13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:27:30.278   13:53:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
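The trace above builds per-vendor PCI device lists and, since this run targets mlx5 NICs over RDMA, collapses pci_devs down to the Mellanox entries. A minimal sketch of that selection pattern, assuming pci_bus_cache and the $mellanox vendor-id variable (0x15b3 per the "Found" lines below) are populated earlier in nvmf/common.sh; $nic_type is a placeholder name:

    # Sketch only; the real logic lives in nvmf/common.sh.
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # ConnectX-4 Lx, this node's NICs
    pci_devs=("${e810[@]}")
    [[ $TEST_TRANSPORT == rdma ]] && pci_devs+=("${x722[@]}" "${mlx[@]}")
    [[ $nic_type == mlx5 ]] && pci_devs=("${mlx[@]}")   # narrow to Mellanox only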
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:27:38.394  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:27:38.394  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
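Both ports were matched as 0x15b3:0x1015 devices bound to mlx5_core, and the loop pinned NVME_CONNECT to 'nvme connect -i 15' for RDMA runs; per nvme-connect(1), -i/--nr-io-queues caps the I/O queue count at 15. A representative full invocation, with the address, port, and NQN taken from values that appear later in this trace:

    # Illustrative use of the NVME_CONNECT prefix set above.
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1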
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:27:38.394  Found net devices under 0000:d9:00.0: mlx_0_0
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:27:38.394  Found net devices under 0000:d9:00.1: mlx_0_1
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:27:38.394   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
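load_ib_rdma_modules has just pulled in the full IB/RDMA core stack. A quick standalone check that the same module set is resident (illustrative only, not part of the harness):

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        lsmod | grep -q "^${m}[[:space:]]" || echo "missing: $m"
    done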
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:27:38.395  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:27:38.395      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:27:38.395      altname enp217s0f0np0
00:27:38.395      altname ens818f0np0
00:27:38.395      inet 192.168.100.8/24 scope global mlx_0_0
00:27:38.395         valid_lft forever preferred_lft forever
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:27:38.395  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:27:38.395      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:27:38.395      altname enp217s0f1np1
00:27:38.395      altname ens818f1np1
00:27:38.395      inet 192.168.100.9/24 scope global mlx_0_1
00:27:38.395         valid_lft forever preferred_lft forever
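The per-interface address lookup expands exactly as the common.sh@116-117 lines show; reconstructed as a standalone helper:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9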
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:27:38.395      13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:27:38.395      13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:27:38.395     13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:27:38.395  192.168.100.9'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:27:38.395  192.168.100.9'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:27:38.395  192.168.100.9'
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2
00:27:38.395    13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
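The first and second target IPs are peeled off the newline-separated RDMA_IP_LIST with head/tail, exactly as the @485/@486 expansions above show:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)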
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:27:38.395   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3425298
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3425298
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3425298 ']'
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:38.396  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:38.396   13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.396  [2024-12-14 13:53:37.000888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:38.396  [2024-12-14 13:53:37.001008] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:38.396  [2024-12-14 13:53:37.129747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:38.396  [2024-12-14 13:53:37.229357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:38.396  [2024-12-14 13:53:37.229409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:38.396  [2024-12-14 13:53:37.229422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:38.396  [2024-12-14 13:53:37.229436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:38.396  [2024-12-14 13:53:37.229446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:38.396  [2024-12-14 13:53:37.231953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:27:38.396  [2024-12-14 13:53:37.231982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:27:38.396  [2024-12-14 13:53:37.232070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:38.396  [2024-12-14 13:53:37.232094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
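The -m 0x1E mask passed to nvmf_tgt decodes to cores 1 through 4, matching the four reactors reported above; core 0 is left free for the single-core (-m 0x1) initiator apps started later in this test. As a quick check:

    printf '0x%X\n' "$((2#11110))"   # -> 0x1E: bits 1-4 set, i.e. cores 1,2,3,4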
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.396   13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.396  [2024-12-14 13:53:37.910687] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7f0ecc9bd940) succeed.
00:27:38.396  [2024-12-14 13:53:37.920723] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7f0ecc979940) succeed.
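The transport creation at shutdown.sh@21 goes through the harness's rpc_cmd wrapper, after which both mlx5 IB devices come up. The equivalent standalone call against the target's default /var/tmp/spdk.sock socket would be (scripts/rpc.py is SPDK's in-tree RPC client):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192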
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.655   13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:38.655  Malloc1
00:27:38.655  [2024-12-14 13:53:38.332794] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:27:38.913  Malloc2
00:27:38.913  Malloc3
00:27:38.913  Malloc4
00:27:39.172  Malloc5
00:27:39.172  Malloc6
00:27:39.172  Malloc7
00:27:39.430  Malloc8
00:27:39.430  Malloc9
00:27:39.430  Malloc10
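The ten cat calls at shutdown.sh@29 accumulate one RPC batch per subsystem into rpcs.txt, which the rpc_cmd at shutdown.sh@36 replays in one shot — hence the Malloc1..Malloc10 bdevs and the 4420 listener above. A representative per-subsystem entry; only the resulting bdev names, NQNs, and listener address are visible in this log, so the exact flags and sizes below are assumptions:

    # Illustrative shape only; the real batch lives in target/shutdown.sh.
    bdev_malloc_create 128 512 -b Malloc$i          # sizes (MB, bytes) are placeholders
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420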
00:27:39.688   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.688   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:27:39.688   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:39.688   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:39.688   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3425736
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3425736 /var/tmp/bdevperf.sock
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3425736 ']'
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:39.689  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:27:39.689   13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.689  }
00:27:39.689  EOF
00:27:39.689  )")
00:27:39.689     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:39.689    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:39.689  {
00:27:39.689    "params": {
00:27:39.689      "name": "Nvme$subsystem",
00:27:39.689      "trtype": "$TEST_TRANSPORT",
00:27:39.689      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:39.689      "adrfam": "ipv4",
00:27:39.689      "trsvcid": "$NVMF_PORT",
00:27:39.689      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:39.689      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:39.689      "hdgst": ${hdgst:-false},
00:27:39.689      "ddgst": ${ddgst:-false}
00:27:39.689    },
00:27:39.689    "method": "bdev_nvme_attach_controller"
00:27:39.690  }
00:27:39.690  EOF
00:27:39.690  )")
00:27:39.690  [2024-12-14 13:53:39.303243] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:39.690  [2024-12-14 13:53:39.303338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:27:39.690     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:39.690    13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:27:39.690     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:27:39.690     13:53:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme1",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme2",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme3",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme4",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme5",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme6",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme7",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme8",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme9",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  },{
00:27:39.690    "params": {
00:27:39.690      "name": "Nvme10",
00:27:39.690      "trtype": "rdma",
00:27:39.690      "traddr": "192.168.100.8",
00:27:39.690      "adrfam": "ipv4",
00:27:39.690      "trsvcid": "4420",
00:27:39.690      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:39.690      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:39.690      "hdgst": false,
00:27:39.690      "ddgst": false
00:27:39.690    },
00:27:39.690    "method": "bdev_nvme_attach_controller"
00:27:39.690  }'
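The block above is the fully expanded output of gen_nvmf_target_json: each loop pass appends one heredoc-built params object with $subsystem substituted (1 through 10), and the pieces are then comma-joined and validated. Condensed from the @584-@586 expansions; the outer bdev-subsystem wrapper that bdev_svc/bdevperf ultimately consume is not visible in the trace and is assumed:

    # config[] holds the ten expanded blocks shown in the template passes above.
    (IFS=,; printf '%s\n' "${config[*]}") | jq .   # joins with "," and pretty-prints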
00:27:39.948  [2024-12-14 13:53:39.440902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:39.948  [2024-12-14 13:53:39.546164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3425736
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1
00:27:41.323   13:53:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1
00:27:42.260  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3425736 Killed                  $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
00:27:42.260   13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3425298
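This is the heart of tc1: the bdev_svc initiator (pid 3425736) that attached to all ten subsystems is hard-killed, and the test then asserts the target (pid 3425298) survived before rerunning the same workload under bdevperf. Condensed from the trace; the PIDs are specific to this run:

    kill -9 3425736             # SIGKILL the initiator mid-flight (shutdown.sh@84)
    rm -f /var/run/spdk_bdev1   # remove its leftover run file (shutdown.sh@85)
    sleep 1
    kill -0 3425298             # succeeds only if nvmf_tgt is still alive (shutdown.sh@89)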
00:27:42.260   13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:42.260  {
00:27:42.260    "params": {
00:27:42.260      "name": "Nvme$subsystem",
00:27:42.260      "trtype": "$TEST_TRANSPORT",
00:27:42.260      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:42.260      "adrfam": "ipv4",
00:27:42.260      "trsvcid": "$NVMF_PORT",
00:27:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:42.260      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:42.260      "hdgst": ${hdgst:-false},
00:27:42.260      "ddgst": ${ddgst:-false}
00:27:42.260    },
00:27:42.260    "method": "bdev_nvme_attach_controller"
00:27:42.260  }
00:27:42.260  EOF
00:27:42.260  )")
00:27:42.260     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:42.260  {
00:27:42.260    "params": {
00:27:42.260      "name": "Nvme$subsystem",
00:27:42.260      "trtype": "$TEST_TRANSPORT",
00:27:42.260      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:42.260      "adrfam": "ipv4",
00:27:42.260      "trsvcid": "$NVMF_PORT",
00:27:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:42.260      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:42.260      "hdgst": ${hdgst:-false},
00:27:42.260      "ddgst": ${ddgst:-false}
00:27:42.260    },
00:27:42.260    "method": "bdev_nvme_attach_controller"
00:27:42.260  }
00:27:42.260  EOF
00:27:42.260  )")
00:27:42.260     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:42.260  {
00:27:42.260    "params": {
00:27:42.260      "name": "Nvme$subsystem",
00:27:42.260      "trtype": "$TEST_TRANSPORT",
00:27:42.260      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:42.260      "adrfam": "ipv4",
00:27:42.260      "trsvcid": "$NVMF_PORT",
00:27:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:42.260      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:42.260      "hdgst": ${hdgst:-false},
00:27:42.260      "ddgst": ${ddgst:-false}
00:27:42.260    },
00:27:42.260    "method": "bdev_nvme_attach_controller"
00:27:42.260  }
00:27:42.260  EOF
00:27:42.260  )")
00:27:42.260     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:42.260    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:42.260  {
00:27:42.260    "params": {
00:27:42.260      "name": "Nvme$subsystem",
00:27:42.260      "trtype": "$TEST_TRANSPORT",
00:27:42.260      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:42.260      "adrfam": "ipv4",
00:27:42.260      "trsvcid": "$NVMF_PORT",
00:27:42.260      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:42.260      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:42.260      "hdgst": ${hdgst:-false},
00:27:42.260      "ddgst": ${ddgst:-false}
00:27:42.260    },
00:27:42.260    "method": "bdev_nvme_attach_controller"
00:27:42.260  }
00:27:42.260  EOF
00:27:42.260  )")
00:27:42.261     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
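The @562/@582 lines above repeat verbatim once per subsystem ID passed to gen_nvmf_target_json; each pass appends one bdev_nvme_attach_controller stanza to the config array, and each trailing cat closes that iteration's command substitution. A minimal bash sketch consistent with the nvmf/common.sh line numbers in this trace (a hedged reconstruction, not a verbatim copy of the script):

gen_nvmf_target_json() {
    local subsystem config=()
    # One attach-controller stanza per requested subsystem ID.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" with IFS=, comma-joins the stanzas (hence the '},{'
    # seams in the printf output below); jq validates the final document.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [$(IFS=","; printf '%s\n' "${config[*]}")]
    }
  ]
}
JSON
}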
00:27:42.261  [2024-12-14 13:53:41.733828] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:42.261  [2024-12-14 13:53:41.733914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426129 ]
00:27:42.261    13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:27:42.261     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:27:42.261     13:53:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme1",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme2",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme3",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme4",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme5",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme6",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme7",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme8",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme9",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  },{
00:27:42.261    "params": {
00:27:42.261      "name": "Nvme10",
00:27:42.261      "trtype": "rdma",
00:27:42.261      "traddr": "192.168.100.8",
00:27:42.261      "adrfam": "ipv4",
00:27:42.261      "trsvcid": "4420",
00:27:42.261      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:42.261      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:42.261      "hdgst": false,
00:27:42.261      "ddgst": false
00:27:42.261    },
00:27:42.261    "method": "bdev_nvme_attach_controller"
00:27:42.261  }'
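The printed document above is that template expanded for subsystems 1 through 10. bdevperf reads it from a file descriptor rather than a file on disk, which is where the /dev/fd/63 argument recorded later in this log comes from. A hedged usage sketch (the -q/-o/-w values match this run; -t 1 is inferred from the one-second I/O window below):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1

Process substitution is why the trace shows --json /dev/fd/63: the generated JSON never touches the workspace.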
00:27:42.261  [2024-12-14 13:53:41.871261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:42.261  [2024-12-14 13:53:41.981741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:43.637  Running I/O for 1 second...
00:27:44.830       3137.00 IOPS,   196.06 MiB/s
00:27:44.830                                                                                                  Latency(us)
00:27:44.830  
00:27:44.830  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:44.830  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.830  	 Verification LBA range: start 0x0 length 0x400
00:27:44.830  	 Nvme1n1             :       1.19     338.28      21.14       0.00     0.00  185687.39    9384.76  260046.85
00:27:44.830  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.830  	 Verification LBA range: start 0x0 length 0x400
00:27:44.830  	 Nvme2n1             :       1.19     348.76      21.80       0.00     0.00  177405.20   13526.63  181193.93
00:27:44.830  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.830  	 Verification LBA range: start 0x0 length 0x400
00:27:44.830  	 Nvme3n1             :       1.19     350.85      21.93       0.00     0.00  173862.32    4875.88  174483.05
00:27:44.830  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme4n1             :       1.20     356.27      22.27       0.00     0.00  168772.18    6239.03  162739.00
00:27:44.831  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme5n1             :       1.20     334.23      20.89       0.00     0.00  176796.99   13736.35  156028.11
00:27:44.831  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme6n1             :       1.20     336.43      21.03       0.00     0.00  173184.26    6212.81  145961.78
00:27:44.831  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme7n1             :       1.20     357.61      22.35       0.00     0.00  161355.19   13736.35  141767.48
00:27:44.831  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme8n1             :       1.20     351.30      21.96       0.00     0.00  161531.14   14260.63  136734.31
00:27:44.831  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme9n1             :       1.20     332.70      20.79       0.00     0.00  167182.27   13054.77  124990.26
00:27:44.831  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.831  	 Verification LBA range: start 0x0 length 0x400
00:27:44.831  	 Nvme10n1            :       1.19     268.99      16.81       0.00     0.00  205038.71   12582.91  273468.62
00:27:44.831  
00:27:44.831  ===================================================================================================================
00:27:44.831  
00:27:44.831  Total                       :               3375.41     210.96       0.00     0.00  174284.97    4875.88  273468.62
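The MiB/s column is a direct function of IOPS at the fixed 65536-byte I/O size: one I/O is 64 KiB, i.e. 1/16 MiB. A quick check of the totals line:

# 3375.41 IOPS x 65536 B per I/O / 1048576 B per MiB = 210.96 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 3375.41 * 65536 / (1024 * 1024) }'

The same ratio holds per job, e.g. Nvme1n1: 338.28 IOPS / 16 = 21.14 MiB/s.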
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:27:45.766  rmmod nvme_rdma
00:27:45.766  rmmod nvme_fabrics
00:27:45.766   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3425298 ']'
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3425298
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3425298 ']'
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3425298
00:27:46.024    13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:46.024    13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425298
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425298'
00:27:46.024  killing process with pid 3425298
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3425298
00:27:46.024   13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3425298
00:27:49.309   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:49.309   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:27:49.309  
00:27:49.309  real	0m19.201s
00:27:49.309  user	0m51.546s
00:27:49.309  sys	0m7.097s
00:27:49.309   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:49.309   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:49.309  ************************************
00:27:49.309  END TEST nvmf_shutdown_tc1
00:27:49.309  ************************************
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:49.569  ************************************
00:27:49.569  START TEST nvmf_shutdown_tc2
00:27:49.569  ************************************
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:49.569    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=()
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:27:49.569  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:27:49.569  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:27:49.569  Found net devices under 0000:d9:00.0: mlx_0_0
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:49.569   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:27:49.570  Found net devices under 0000:d9:00.1: mlx_0_1
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
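load_ib_rdma_modules (nvmf/common.sh@66-72 above) probes the kernel RDMA stack in dependency order before any addresses are assigned; the traced sequence is equivalent to:

# Same order as the trace; modprobe is idempotent, so re-running
# on an already-loaded stack is harmless.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done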
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:27:49.570  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:27:49.570      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:27:49.570      altname enp217s0f0np0
00:27:49.570      altname ens818f0np0
00:27:49.570      inet 192.168.100.8/24 scope global mlx_0_0
00:27:49.570         valid_lft forever preferred_lft forever
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:27:49.570  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:27:49.570      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:27:49.570      altname enp217s0f1np1
00:27:49.570      altname ens818f1np1
00:27:49.570      inet 192.168.100.9/24 scope global mlx_0_1
00:27:49.570         valid_lft forever preferred_lft forever
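get_ip_address (nvmf/common.sh@116-117 in the trace) isolates the plain IPv4 address from the single-line `ip -o` output; the pipeline reduces to:

get_ip_address() {
    local interface=$1
    # Field 4 of 'ip -o -4 addr show' is the CIDR address, e.g.
    # 192.168.100.8/24; cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

Here it yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1.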
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:27:49.570   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:27:49.570      13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:27:49.570      13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:27:49.570     13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:27:49.570    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:27:49.829  192.168.100.9'
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:27:49.829  192.168.100.9'
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:27:49.829  192.168.100.9'
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2
00:27:49.829    13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
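With two RDMA-capable ports, RDMA_IP_LIST is a two-line string, and the first and second target IPs fall out of it with head/tail exactly as traced at @485/@486:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)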
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3427474
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3427474
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3427474 ']'
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:49.829  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:49.829   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:49.830   13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.830  [2024-12-14 13:53:49.462466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:49.830  [2024-12-14 13:53:49.462563] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:50.088  [2024-12-14 13:53:49.598083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:50.088  [2024-12-14 13:53:49.697481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:50.088  [2024-12-14 13:53:49.697529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:50.088  [2024-12-14 13:53:49.697541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:50.088  [2024-12-14 13:53:49.697570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:50.088  [2024-12-14 13:53:49.697580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:50.088  [2024-12-14 13:53:49.699955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:27:50.088  [2024-12-14 13:53:49.700019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:27:50.088  [2024-12-14 13:53:49.700143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:50.088  [2024-12-14 13:53:49.700168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
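Four reactors on cores 1-4 are exactly what the -m 0x1E mask passed to nvmf_tgt requests: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1-4 are set. Decoding the mask in shell:

mask=0x1E
for core in {0..7}; do
    # Each set bit in the core mask gets one reactor thread.
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# prints cores 1, 2, 3, 4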
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.655   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.655  [2024-12-14 13:53:50.358114] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7fc6bcfbd940) succeed.
00:27:50.655  [2024-12-14 13:53:50.368322] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7fc6bcf79940) succeed.
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:50.913   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.172   13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:51.172  Malloc1
00:27:51.172  [2024-12-14 13:53:50.782122] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:27:51.172  Malloc2
00:27:51.430  Malloc3
00:27:51.431  Malloc4
00:27:51.431  Malloc5
00:27:51.689  Malloc6
00:27:51.689  Malloc7
00:27:51.689  Malloc8
00:27:51.948  Malloc9
00:27:51.948  Malloc10
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
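Each `cat` at shutdown.sh@29 appends one subsystem's worth of RPC lines to rpcs.txt, and the single rpc_cmd at @36 replays the whole batch, which is what produces the ten Malloc bdevs and the RDMA listener notice above. A hedged sketch of one iteration's contribution (the RPC names are standard SPDK ones, but the malloc size and block-size arguments are illustrative assumptions, not values recorded in this log; $rpcs stands for the rpcs.txt path shown above):

i=1
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF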
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3428005
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3428005 /var/tmp/bdevperf.sock
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3428005 ']'
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:27:51.948  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=()
00:27:51.948   13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:51.948  {
00:27:51.948    "params": {
00:27:51.948      "name": "Nvme$subsystem",
00:27:51.948      "trtype": "$TEST_TRANSPORT",
00:27:51.948      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:51.948      "adrfam": "ipv4",
00:27:51.948      "trsvcid": "$NVMF_PORT",
00:27:51.948      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:51.948      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:51.948      "hdgst": ${hdgst:-false},
00:27:51.948      "ddgst": ${ddgst:-false}
00:27:51.948    },
00:27:51.948    "method": "bdev_nvme_attach_controller"
00:27:51.948  }
00:27:51.948  EOF
00:27:51.948  )")
00:27:51.948     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:51.948  {
00:27:51.948    "params": {
00:27:51.948      "name": "Nvme$subsystem",
00:27:51.948      "trtype": "$TEST_TRANSPORT",
00:27:51.948      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:51.948      "adrfam": "ipv4",
00:27:51.948      "trsvcid": "$NVMF_PORT",
00:27:51.948      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:51.948      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:51.948      "hdgst": ${hdgst:-false},
00:27:51.948      "ddgst": ${ddgst:-false}
00:27:51.948    },
00:27:51.948    "method": "bdev_nvme_attach_controller"
00:27:51.948  }
00:27:51.948  EOF
00:27:51.948  )")
00:27:51.948     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:51.948  {
00:27:51.948    "params": {
00:27:51.948      "name": "Nvme$subsystem",
00:27:51.948      "trtype": "$TEST_TRANSPORT",
00:27:51.948      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:51.948      "adrfam": "ipv4",
00:27:51.948      "trsvcid": "$NVMF_PORT",
00:27:51.948      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:51.948      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:51.948      "hdgst": ${hdgst:-false},
00:27:51.948      "ddgst": ${ddgst:-false}
00:27:51.948    },
00:27:51.948    "method": "bdev_nvme_attach_controller"
00:27:51.948  }
00:27:51.948  EOF
00:27:51.948  )")
00:27:51.948     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:51.948    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:51.948  {
00:27:51.948    "params": {
00:27:51.948      "name": "Nvme$subsystem",
00:27:51.948      "trtype": "$TEST_TRANSPORT",
00:27:51.948      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:51.948      "adrfam": "ipv4",
00:27:51.948      "trsvcid": "$NVMF_PORT",
00:27:51.948      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:51.948      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:51.948      "hdgst": ${hdgst:-false},
00:27:51.948      "ddgst": ${ddgst:-false}
00:27:51.948    },
00:27:51.948    "method": "bdev_nvme_attach_controller"
00:27:51.948  }
00:27:51.948  EOF
00:27:51.948  )")
00:27:51.948     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:52.208  {
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme$subsystem",
00:27:52.208      "trtype": "$TEST_TRANSPORT",
00:27:52.208      "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "$NVMF_PORT",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:52.208      "hdgst": ${hdgst:-false},
00:27:52.208      "ddgst": ${ddgst:-false}
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  }
00:27:52.208  EOF
00:27:52.208  )")
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:27:52.208  [2024-12-14 13:53:51.733840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:27:52.208  [2024-12-14 13:53:51.733935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428005 ]
00:27:52.208    13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq .
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:27:52.208     13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme1",
00:27:52.208      "trtype": "rdma",
00:27:52.208      "traddr": "192.168.100.8",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "4420",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:52.208      "hdgst": false,
00:27:52.208      "ddgst": false
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.208  },{
00:27:52.208    "params": {
00:27:52.208      "name": "Nvme2",
00:27:52.208      "trtype": "rdma",
00:27:52.208      "traddr": "192.168.100.8",
00:27:52.208      "adrfam": "ipv4",
00:27:52.208      "trsvcid": "4420",
00:27:52.208      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:27:52.208      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:27:52.208      "hdgst": false,
00:27:52.208      "ddgst": false
00:27:52.208    },
00:27:52.208    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme3",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme4",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme5",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme6",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme7",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme8",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme9",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  },{
00:27:52.209    "params": {
00:27:52.209      "name": "Nvme10",
00:27:52.209      "trtype": "rdma",
00:27:52.209      "traddr": "192.168.100.8",
00:27:52.209      "adrfam": "ipv4",
00:27:52.209      "trsvcid": "4420",
00:27:52.209      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:52.209      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:52.209      "hdgst": false,
00:27:52.209      "ddgst": false
00:27:52.209    },
00:27:52.209    "method": "bdev_nvme_attach_controller"
00:27:52.209  }'
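The ten near-identical heredoc expansions above come from gen_nvmf_target_json, which appends one bdev_nvme_attach_controller fragment per subsystem argument and then comma-joins the array for jq, producing the pretty-printed config just shown. A condensed sketch matching the traced steps; the enclosing "subsystems"/"bdev" envelope is an assumption about what bdevperf's --json input expects:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,   # makes "${config[*]}" comma-join the fragments
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}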
00:27:52.209  [2024-12-14 13:53:51.869224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.467  [2024-12-14 13:53:51.973496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:53.403  Running I/O for 10 seconds...
00:27:53.403   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:53.403   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:53.403   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:53.403   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.403   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:27:53.662   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:27:53.662    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:53.662    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:27:53.662    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.662    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.920    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.920   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3
00:27:53.920   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:27:53.920   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=155
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']'
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
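waitforio, traced above, is a bounded poll: up to 10 attempts, each reading num_read_ops for the named bdev over the bdevperf RPC socket and declaring success once at least 100 reads have completed. A paraphrase with the retry count, threshold, and sleep interval taken from the log (rpc_cmd is the harness's RPC wrapper):

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # the first sample above read 3 ops, the next 155, which cleared the bar
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}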
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3428005
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3428005 ']'
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3428005
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:27:54.179   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:54.179    13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3428005
00:27:54.438   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:54.438   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:54.438   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3428005'
00:27:54.438  killing process with pid 3428005
00:27:54.438   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3428005
00:27:54.438   13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3428005
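killprocess does not blindly signal a pid: it first checks the process is still alive, then resolves its comm name (reactor_0 here) so it never kills a sudo wrapper directly. A sketch of the flow as traced; the body of the sudo branch is assumed, since this run never takes it:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                          # gone already?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        # assumed: retarget to the child that the sudo wrapper spawned
        pid=$(ps --no-headers -o pid= --ppid "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # only valid because the harness started it as a child
}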
00:27:54.438  Received shutdown signal, test time was about 0.895135 seconds
00:27:54.438  
00:27:54.438                                                                                                  Latency(us)
00:27:54.438  
00:27:54.438  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:54.438  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme1n1             :       0.88     318.91      19.93       0.00     0.00  196018.36   10013.90  211392.92
00:27:54.438  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme2n1             :       0.88     315.06      19.69       0.00     0.00  194344.57    9909.04  198810.01
00:27:54.438  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme3n1             :       0.88     325.93      20.37       0.00     0.00  184405.11    5111.81  190421.40
00:27:54.438  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme4n1             :       0.88     362.79      22.67       0.00     0.00  162590.76    5767.17  145122.92
00:27:54.438  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme5n1             :       0.88     352.98      22.06       0.00     0.00  164238.43   10747.90  167772.16
00:27:54.438  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme6n1             :       0.89     361.14      22.57       0.00     0.00  157878.68   11901.34  127506.84
00:27:54.438  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme7n1             :       0.89     360.50      22.53       0.00     0.00  153887.46   12582.91  118279.37
00:27:54.438  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme8n1             :       0.89     359.63      22.48       0.00     0.00  151981.59   13526.63  109051.90
00:27:54.438  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme9n1             :       0.89     358.75      22.42       0.00     0.00  149214.99   14889.78  108213.04
00:27:54.438  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:54.438  	 Verification LBA range: start 0x0 length 0x400
00:27:54.438  	 Nvme10n1            :       0.89     286.30      17.89       0.00     0.00  182634.50   11062.48  219781.53
00:27:54.438  
00:27:54.439  ===================================================================================================================
00:27:54.439  
00:27:54.439  Total                       :               3401.97     212.62       0.00     0.00  168597.87    5111.81  219781.53
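The MiB/s column is simply the IOPS column scaled by the 64 KiB I/O size bdevperf was launched with (-o 65536), so the summary row checks out:

echo 'scale=2; 3401.97 * 65536 / 1048576' | bc    # -> 212.62 MiB/s, as reported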
00:27:55.817   13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3427474
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:27:56.481  rmmod nvme_rdma
00:27:56.481  rmmod nvme_fabrics
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
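The unload path drops to set +e and retries because nvme-rdma can stay busy for a moment while queues drain; here the modules came out on the first pass (the rmmod lines above). A sketch of that loop, with the retry interval assumed since this run never needed it:

nvmfcleanup() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1    # interval assumed; not exercised in this run
    done
    set -e
    return 0
}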
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3427474 ']'
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3427474
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3427474 ']'
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3427474
00:27:56.481    13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:27:56.481   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:56.481    13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3427474
00:27:56.766   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:56.766   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:56.766   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3427474'
00:27:56.766  killing process with pid 3427474
00:27:56.766   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3427474
00:27:56.766   13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3427474
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:00.048  
00:28:00.048  real	0m10.621s
00:28:00.048  user	0m41.366s
00:28:00.048  sys	0m1.663s
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:00.048  ************************************
00:28:00.048  END TEST nvmf_shutdown_tc2
00:28:00.048  ************************************
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:00.048   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:00.308  ************************************
00:28:00.308  START TEST nvmf_shutdown_tc3
00:28:00.308  ************************************
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:00.308    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
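The eval '_remove_spdk_ns 15> /dev/null' line is the harness's per-command trace mute: BASH_XTRACEFD points at fd 15, so redirecting that fd for the duration of one call sends only that call's xtrace output to /dev/null. A standalone reproduction of the trick, with the fd number matching the log:

exec 15>&2                   # give xtrace its own fd
BASH_XTRACEFD=15
set -x
noisy() { echo step1; echo step2; }
noisy                        # inner commands are traced to stderr via fd 15
eval 'noisy 15> /dev/null'   # same call, but its trace is discarded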
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=()
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:28:00.308  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:00.308   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:28:00.309  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:28:00.309  Found net devices under 0000:d9:00.0: mlx_0_0
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:28:00.309  Found net devices under 0000:d9:00.1: mlx_0_1
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
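The device walk above matched both ports of a Mellanox adapter (vendor 0x15b3, device 0x1015, a ConnectX-4 Lx) and collected their netdev names. The same discovery can be reproduced by hand from sysfs; this sketch hardcodes the one ID pair seen in the log:

for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")
    if [ "$ven" = 0x15b3 ] && [ "$dev" = 0x1015 ]; then
        echo "Found ${pci##*/} ($ven - $dev)"
        ls "$pci/net" 2> /dev/null   # netdev names, e.g. mlx_0_0
    fi
done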
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2
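get_rdma_if_list, traced above, intersects the harness's net_devs array with whatever rxe_cfg reports as RDMA-capable; continue 2 hops to the next netdev as soon as a match is printed. A paraphrase of the traced control flow:

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # wraps rxe_cfg_small.sh here
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2
            fi
        done
    done
}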
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:28:00.309  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:00.309      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:28:00.309      altname enp217s0f0np0
00:28:00.309      altname ens818f0np0
00:28:00.309      inet 192.168.100.8/24 scope global mlx_0_0
00:28:00.309         valid_lft forever preferred_lft forever
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:28:00.309  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:00.309      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:28:00.309      altname enp217s0f1np1
00:28:00.309      altname ens818f1np1
00:28:00.309      inet 192.168.100.9/24 scope global mlx_0_1
00:28:00.309         valid_lft forever preferred_lft forever
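get_ip_address is the three-stage pipe traced above: it returns an interface's first IPv4 address by taking field 4 of ip(8)'s one-line output and stripping the /24 prefix length:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9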
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:28:00.309   13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:28:00.309    13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:00.309      13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:00.309      13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:00.309     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:00.310     13:53:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:00.310     13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:00.310     13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:00.310   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:28:00.310  192.168.100.9'
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:28:00.310  192.168.100.9'
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1
00:28:00.310   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:28:00.310  192.168.100.9'
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2
00:28:00.310    13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
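RDMA_IP_LIST is a newline-separated string, so the two target addresses fall out with head/tail exactly as traced:

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9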
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3429521
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3429521
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3429521 ']'
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:00.568  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:00.568   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
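waitforlisten blocks until the freshly forked nvmf_tgt answers on /var/tmp/spdk.sock; the rpc_addr and max_retries locals in the trace above are its state. A rough sketch of that wait loop, assuming rpc.py's spdk_get_version as the liveness probe (the exact probe used by autotest_common.sh may differ):

    # Hedged sketch: poll the RPC socket until the target answers or retries run out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
            # spdk_get_version is a cheap RPC any live SPDK app answers.
            if scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }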
00:28:00.568  [2024-12-14 13:54:00.170558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:00.568  [2024-12-14 13:54:00.170651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:00.826  [2024-12-14 13:54:00.312293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:00.826  [2024-12-14 13:54:00.416107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:00.826  [2024-12-14 13:54:00.416157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:00.826  [2024-12-14 13:54:00.416171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:00.826  [2024-12-14 13:54:00.416184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:00.826  [2024-12-14 13:54:00.416195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:00.826  [2024-12-14 13:54:00.418846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:00.826  [2024-12-14 13:54:00.418921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:00.826  [2024-12-14 13:54:00.419032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:00.826  [2024-12-14 13:54:00.419056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
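The -m 0x1E mask passed to nvmf_tgt above enables CPU bits 1 through 4 (0x1E = 0b11110), which is why exactly four reactors come up on cores 1-4 while core 0 is left free for the bdevperf initiator launched later with -c 0x1. A quick way to decode such a mask:

    # Decode an SPDK core mask into the cores it enables.
    mask=0x1E
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # -> cores 1 2 3 4, matching the four reactor_run notices above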
00:28:01.392   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:01.392   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:28:01.392   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:01.392   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:01.392   13:54:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:01.392   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:01.392   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:28:01.392   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.392   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:01.392  [2024-12-14 13:54:01.065854] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7f0bc09bd940) succeed.
00:28:01.392  [2024-12-14 13:54:01.075916] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7f0bc0979940) succeed.
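shutdown.sh@21 issues the transport-creation RPC through rpc_cmd, which forwards to scripts/rpc.py against the default socket. Spelled out, the equivalent stand-alone invocation would be roughly:

    # Equivalent direct form of the rpc_cmd call traced at shutdown.sh@21;
    # all values come straight from the trace (-u 8192 caps in-capsule data size).
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192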
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.650   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.908   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:01.908   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:28:01.908   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd
00:28:01.908   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.908   13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:01.908  Malloc1
00:28:01.908  [2024-12-14 13:54:01.490246] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:28:01.908  Malloc2
00:28:02.166  Malloc3
00:28:02.166  Malloc4
00:28:02.166  Malloc5
00:28:02.424  Malloc6
00:28:02.424  Malloc7
00:28:02.424  Malloc8
00:28:02.682  Malloc9
00:28:02.682  Malloc10
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
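The cat at shutdown.sh@29 runs once per loop iteration and appends one subsystem's worth of RPCs to rpcs.txt; shutdown.sh@36 then replays the whole file in a single rpc_cmd batch, which is why the ten Malloc bdevs and the listener notice appear together above. A hedged reconstruction of what each iteration likely appends (the exact flags live in shutdown.sh; the RPC names below are standard SPDK ones, and $MALLOC_BDEV_SIZE/$MALLOC_BLOCK_SIZE are stand-in variables, not taken from this log):

    # Assumed shape of the per-subsystem fragment written to rpcs.txt for i=1..10.
    cat >> "$testdir/rpcs.txt" <<-EOF
    	bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
    	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
    EOF
    # shutdown.sh@36 then replays the accumulated batch in one shot:
    #   rpc_cmd < "$testdir/rpcs.txt"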
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3429954
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3429954 /var/tmp/bdevperf.sock
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3429954 ']'
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:02.682  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:02.682   13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.682    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.682  {
00:28:02.682    "params": {
00:28:02.682      "name": "Nvme$subsystem",
00:28:02.682      "trtype": "$TEST_TRANSPORT",
00:28:02.682      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.682      "adrfam": "ipv4",
00:28:02.682      "trsvcid": "$NVMF_PORT",
00:28:02.682      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.682      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.682      "hdgst": ${hdgst:-false},
00:28:02.682      "ddgst": ${ddgst:-false}
00:28:02.682    },
00:28:02.682    "method": "bdev_nvme_attach_controller"
00:28:02.682  }
00:28:02.682  EOF
00:28:02.682  )")
00:28:02.682     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.941  {
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme$subsystem",
00:28:02.941      "trtype": "$TEST_TRANSPORT",
00:28:02.941      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.941      "adrfam": "ipv4",
00:28:02.941      "trsvcid": "$NVMF_PORT",
00:28:02.941      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.941      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.941      "hdgst": ${hdgst:-false},
00:28:02.941      "ddgst": ${ddgst:-false}
00:28:02.941    },
00:28:02.941    "method": "bdev_nvme_attach_controller"
00:28:02.941  }
00:28:02.941  EOF
00:28:02.941  )")
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.941  {
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme$subsystem",
00:28:02.941      "trtype": "$TEST_TRANSPORT",
00:28:02.941      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.941      "adrfam": "ipv4",
00:28:02.941      "trsvcid": "$NVMF_PORT",
00:28:02.941      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.941      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.941      "hdgst": ${hdgst:-false},
00:28:02.941      "ddgst": ${ddgst:-false}
00:28:02.941    },
00:28:02.941    "method": "bdev_nvme_attach_controller"
00:28:02.941  }
00:28:02.941  EOF
00:28:02.941  )")
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.941  {
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme$subsystem",
00:28:02.941      "trtype": "$TEST_TRANSPORT",
00:28:02.941      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.941      "adrfam": "ipv4",
00:28:02.941      "trsvcid": "$NVMF_PORT",
00:28:02.941      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.941      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.941      "hdgst": ${hdgst:-false},
00:28:02.941      "ddgst": ${ddgst:-false}
00:28:02.941    },
00:28:02.941    "method": "bdev_nvme_attach_controller"
00:28:02.941  }
00:28:02.941  EOF
00:28:02.941  )")
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.941  {
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme$subsystem",
00:28:02.941      "trtype": "$TEST_TRANSPORT",
00:28:02.941      "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.941      "adrfam": "ipv4",
00:28:02.941      "trsvcid": "$NVMF_PORT",
00:28:02.941      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.941      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.941      "hdgst": ${hdgst:-false},
00:28:02.941      "ddgst": ${ddgst:-false}
00:28:02.941    },
00:28:02.941    "method": "bdev_nvme_attach_controller"
00:28:02.941  }
00:28:02.941  EOF
00:28:02.941  )")
00:28:02.941  [2024-12-14 13:54:02.450354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:02.941  [2024-12-14 13:54:02.450441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429954 ]
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:28:02.941    13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:28:02.941     13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme1",
00:28:02.941      "trtype": "rdma",
00:28:02.941      "traddr": "192.168.100.8",
00:28:02.941      "adrfam": "ipv4",
00:28:02.941      "trsvcid": "4420",
00:28:02.941      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:02.941      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:02.941      "hdgst": false,
00:28:02.941      "ddgst": false
00:28:02.941    },
00:28:02.941    "method": "bdev_nvme_attach_controller"
00:28:02.941  },{
00:28:02.941    "params": {
00:28:02.941      "name": "Nvme2",
00:28:02.941      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme3",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme4",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme5",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme6",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme7",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme8",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme9",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  },{
00:28:02.942    "params": {
00:28:02.942      "name": "Nvme10",
00:28:02.942      "trtype": "rdma",
00:28:02.942      "traddr": "192.168.100.8",
00:28:02.942      "adrfam": "ipv4",
00:28:02.942      "trsvcid": "4420",
00:28:02.942      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:28:02.942      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:28:02.942      "hdgst": false,
00:28:02.942      "ddgst": false
00:28:02.942    },
00:28:02.942    "method": "bdev_nvme_attach_controller"
00:28:02.942  }'
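gen_nvmf_target_json (nvmf/common.sh@560-586) builds the bdevperf JSON just printed by accumulating one heredoc fragment per subsystem into a bash array, joining the fragments with commas via IFS, and piping the result through jq. A condensed sketch of the pattern; the exact wrapper object around the config array is an assumption, since only the joined fragments are visible in this trace:

    # Condensed form of the config-assembly loop traced above (common.sh@560-586).
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
    	{
    	  "params": {
    	    "name": "Nvme$subsystem",
    	    "trtype": "$TEST_TRANSPORT",
    	    "traddr": "$NVMF_FIRST_TARGET_IP",
    	    "adrfam": "ipv4",
    	    "trsvcid": "$NVMF_PORT",
    	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    	    "hdgst": ${hdgst:-false},
    	    "ddgst": ${ddgst:-false}
    	  },
    	  "method": "bdev_nvme_attach_controller"
    	}
    EOF
            )")
        done
        # Join fragments with commas; the wrapper shape below is assumed.
        local IFS=,
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }

bdevperf then receives this on /dev/fd/63 via --json, as shown in the shutdown.sh@125 command line above.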
00:28:02.942  [2024-12-14 13:54:02.585761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:03.200  [2024-12-14 13:54:02.690085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.134  Running I/O for 10 seconds...
00:28:04.134   13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:04.134   13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:28:04.134   13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:28:04.134   13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.134   13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:28:04.393   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:28:04.393    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:04.393    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:28:04.393    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.393    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:04.651    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:04.651   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:28:04.651   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:28:04.651   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=163
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 163 -ge 100 ']'
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
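waitforio (shutdown.sh@51-70) is the readiness gate visible above: it polls bdev_get_iostat on Nvme1n1 up to ten times, 0.25 s apart, and succeeds once at least 100 reads have completed (3 on the first pass here, 163 on the second). A minimal sketch matching the traced control flow:

    # Minimal sketch of the polling loop traced at shutdown.sh@51-70.
    waitforio() {
        local rpc_addr=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(scripts/rpc.py -s "$rpc_addr" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break    # enough I/O observed; bdevperf is genuinely running
            fi
            sleep 0.25
        done
        return $ret
    }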
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3429521
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3429521 ']'
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3429521
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:28:04.909   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:04.909    13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3429521
00:28:05.168   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:05.168   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:05.168   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3429521'
00:28:05.168  killing process with pid 3429521
00:28:05.168   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3429521
00:28:05.168   13:54:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3429521
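killprocess (autotest_common.sh@954-978) is what actually triggers the shutdown under test: it checks the pid is alive, refuses to kill a bare sudo wrapper, signals the target, and waits for it to exit, which produces the qpair teardown messages that follow. A rough sketch matching the traced steps:

    # Rough sketch of the killprocess helper traced above (autotest_common.sh@954-978).
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_1" here
            [ "$process_name" = sudo ] && return 1       # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"      # reap it; the target's shutdown path runs from here
    }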
00:28:06.104       2604.00 IOPS,   162.75 MiB/s
00:28:06.104  [2024-12-14 13:54:05.753361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.753434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.753453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.753466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.753479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.753491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.753504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.753515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.755967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.755993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.756024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.756039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.756053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.756064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.756078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.756102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.756114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.758457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.758477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.758499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.758513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.758526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.758538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.758551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.758563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.758575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.758587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.761005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.761025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.761048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.761061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.761074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.761086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.761098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.761110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.761122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.761133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.763255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.763274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.763297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.763310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.763322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.763334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.763349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.763361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.763373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.763384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.765564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.765588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.765614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.765631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.765648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.765663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.765679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.765694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.765710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.104  [2024-12-14 13:54:05.765725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.104  [2024-12-14 13:54:05.768076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.104  [2024-12-14 13:54:05.768099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:06.104  [2024-12-14 13:54:05.768126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.768143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.768160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.768175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.768192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.768207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.768223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.768238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.771017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.105  [2024-12-14 13:54:05.771040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:06.105  [2024-12-14 13:54:05.771072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.771090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.771106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.771121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.771137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.771168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.771184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.773329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.105  [2024-12-14 13:54:05.773351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:06.105  [2024-12-14 13:54:05.773376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.773393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.773418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.773433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.773449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.773464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.773480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.773495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.776001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.105  [2024-12-14 13:54:05.776024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:28:06.105  [2024-12-14 13:54:05.776052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.776069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.776086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.776103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.776119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.776139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.776155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:06.105  [2024-12-14 13:54:05.776171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.778730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:06.105  [2024-12-14 13:54:05.778755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:06.105  [2024-12-14 13:54:05.781398] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.783896] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.786557] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.789074] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.791321] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.793682] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.796154] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.798596] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:28:06.105  [2024-12-14 13:54:05.798709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf300 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf240 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf180 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf0c0 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f000 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8ef40 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7ee80 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.798996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6edc0 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5ed00 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4ec40 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3eb80 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2eac0 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1ea00 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0e940 len:0x10000 key:0x183e00
00:28:06.105  [2024-12-14 13:54:05.799271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002deffc0 len:0x10000 key:0x184000
00:28:06.105  [2024-12-14 13:54:05.799311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.105  [2024-12-14 13:54:05.799333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff00 len:0x10000 key:0x184000
00:28:06.105  [2024-12-14 13:54:05.799350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcfe40 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfd80 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafcc0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fc00 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fb40 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fa80 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6f9c0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5f900 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4f840 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3f780 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2f6c0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f600 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f540 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff480 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef3c0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.799972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf300 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.799989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf240 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf180 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf0c0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f000 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8ef40 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7ee80 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6edc0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5ed00 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4ec40 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3eb80 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2eac0 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1ea00 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0e940 len:0x10000 key:0x184000
00:28:06.106  [2024-12-14 13:54:05.800498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002feffc0 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff00 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcfe40 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfd80 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafcc0 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fc00 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fb40 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fa80 len:0x10000 key:0x184300
00:28:06.106  [2024-12-14 13:54:05.800825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.106  [2024-12-14 13:54:05.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6f9c0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.800864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.800886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5f900 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.800903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.800925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4f840 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.800948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.800970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3f780 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.800987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2f6c0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f600 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f540 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff480 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef3c0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf300 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.801220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.801244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef3c0 len:0x10000 key:0x183e00
00:28:06.107  [2024-12-14 13:54:05.801261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804464] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:28:06.107  [2024-12-14 13:54:05.804501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x184300
00:28:06.107  [2024-12-14 13:54:05.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.804997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfd80 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.107  [2024-12-14 13:54:05.805552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0x184400
00:28:06.107  [2024-12-14 13:54:05.805569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f000 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.805970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 key:0x184400
00:28:06.108  [2024-12-14 13:54:05.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.108  [2024-12-14 13:54:05.806843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0x184700
00:28:06.108  [2024-12-14 13:54:05.806860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.109  [2024-12-14 13:54:05.806882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0x184700
00:28:06.109  [2024-12-14 13:54:05.806899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.109  [2024-12-14 13:54:05.806921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0x184700
00:28:06.109  [2024-12-14 13:54:05.806948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.109  [2024-12-14 13:54:05.806969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0x184700
00:28:06.109  [2024-12-14 13:54:05.806986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.109  [2024-12-14 13:54:05.807009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0x184700
00:28:06.109  [2024-12-14 13:54:05.807026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.109  [2024-12-14 13:54:05.807048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x184300
00:28:06.109  [2024-12-14 13:54:05.807065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.368  [2024-12-14 13:54:05.838251] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838353] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838375] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838394] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838410] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838426] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838445] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838461] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838477] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838494] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.838509] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:28:06.368  [2024-12-14 13:54:05.845473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.845519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.846465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.846500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.846517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.846534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.849961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:06.368  task offset: 36864 on job bdev=Nvme1n1 fails
00:28:06.368  
00:28:06.368                                                                                                  Latency(us)
00:28:06.368  
00:28:06.368  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:06.368  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme1n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme1n1             :       1.97     129.96       8.12      32.49     0.00  390043.07   40475.03 1060320.05
00:28:06.368  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme2n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme2n1             :       1.97     132.44       8.28      32.48     0.00  380665.84    4587.52 1060320.05
00:28:06.368  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme3n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme3n1             :       1.97     129.85       8.12      32.46     0.00  383463.59   51380.22 1053609.16
00:28:06.368  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme4n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme4n1             :       1.97     144.49       9.03      32.45     0.00  348665.87    7864.32 1053609.16
00:28:06.368  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme5n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme5n1             :       1.97     133.79       8.36      32.43     0.00  367696.04   12792.63 1053609.16
00:28:06.368  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme6n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme6n1             :       1.97     141.84       8.86      32.42     0.00  347588.01   14155.78 1053609.16
00:28:06.368  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme7n1 ended in about 1.97 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme7n1             :       1.97     145.83       9.11      32.41     0.00  336522.74   22649.24 1046898.28
00:28:06.368  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme8n1 ended in about 1.98 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme8n1             :       1.98     134.63       8.41      32.39     0.00  355917.83   31876.71 1046898.28
00:28:06.368  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme9n1 ended in about 1.93 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme9n1             :       1.93     132.47       8.28      33.12     0.00  357066.34   59978.55 1087163.60
00:28:06.368  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:06.368  Job: Nvme10n1 ended in about 1.94 seconds with error
00:28:06.368  	 Verification LBA range: start 0x0 length 0x400
00:28:06.368  	 Nvme10n1            :       1.94      99.06       6.19      33.02     0.00  443466.55   60397.98 1080452.71
00:28:06.368  
00:28:06.368  ===================================================================================================================
00:28:06.368  
00:28:06.368  Total                       :               1324.35      82.77     325.66     0.00  368925.72    4587.52 1087163.60
00:28:06.368  [2024-12-14 13:54:05.977839] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:06.368  [2024-12-14 13:54:05.977910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.977951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.977967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:06.368  [2024-12-14 13:54:05.988532] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.988563] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.988576] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:28:06.368  [2024-12-14 13:54:05.988686] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.988699] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.988708] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177e30c0
00:28:06.368  [2024-12-14 13:54:05.993943] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.993967] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.993978] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177d6c00
00:28:06.368  [2024-12-14 13:54:05.994072] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.994086] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.994095] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177cd8c0
00:28:06.368  [2024-12-14 13:54:05.994185] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.994198] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.994207] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177be9c0
00:28:06.368  [2024-12-14 13:54:05.994303] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.994316] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.994324] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177ab000
00:28:06.368  [2024-12-14 13:54:05.995288] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.995307] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.995316] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017799080
00:28:06.368  [2024-12-14 13:54:05.995417] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.995429] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.995439] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017752100
00:28:06.368  [2024-12-14 13:54:05.995514] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.995527] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.995536] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778f200
00:28:06.368  [2024-12-14 13:54:05.995618] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:06.368  [2024-12-14 13:54:05.995630] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:06.368  [2024-12-14 13:54:05.995639] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001777f940
00:28:07.304  [2024-12-14 13:54:06.992863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:06.992914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:06.994494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:06.994513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:06.994568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:06.994582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:06.994599] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:06.994619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:06.994642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:06.994653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:06.994664] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:06.994676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:06.998107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:06.998134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:06.999645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:06.999662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.000912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.000935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.002220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.002236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.003599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.003615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.004883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.004899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.006146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.006162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.007471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:07.304  [2024-12-14 13:54:07.007492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:28:07.304  [2024-12-14 13:54:07.007506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:07.007536] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:07.007553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:07.007575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:07.007603] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:07.007617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:07.007637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:07.007665] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:07.007680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:07.007697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:07.007725] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:07.007743] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:07.007860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:28:07.304  [2024-12-14 13:54:07.007893] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state
00:28:07.304  [2024-12-14 13:54:07.007908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:28:07.304  [2024-12-14 13:54:07.007926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:28:07.304  [2024-12-14 13:54:07.007948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:28:07.305  [2024-12-14 13:54:07.007962] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state
00:28:07.305  [2024-12-14 13:54:07.007977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:28:07.305  [2024-12-14 13:54:07.007995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:28:07.305  [2024-12-14 13:54:07.008009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:28:07.305  [2024-12-14 13:54:07.008023] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state
00:28:07.305  [2024-12-14 13:54:07.008037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:28:07.305  [2024-12-14 13:54:07.008055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:28:07.305  [2024-12-14 13:54:07.008069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:28:07.305  [2024-12-14 13:54:07.008082] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state
00:28:07.305  [2024-12-14 13:54:07.008096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:28:08.680   13:54:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3429954
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3429954
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:09.616    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3429954
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
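The 'NOT wait 3429954' sequence above is the harness asserting that waiting on the bdevperf pid fails, i.e. that the process died with a nonzero status. A sketch of the idiom, paraphrased from the traced branches rather than copied from autotest_common.sh (valid_exec_arg first checks 'type -t' on the word, as seen in the trace):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # fold death-by-signal codes (255 here) down to 127
    case "$es" in
        0) ;;                  # wrapped command succeeded
        *) es=1 ;;             # any failure collapses to 1
    esac
    (( !es == 0 ))             # exit 0 exactly when the wrapped command failed
}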
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:09.616  rmmod nvme_rdma
00:28:09.616  rmmod nvme_fabrics
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3429521 ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3429521
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3429521 ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3429521
00:28:09.616  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3429521) - No such process
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3429521 is not found'
00:28:09.616  Process with pid 3429521 is not found
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:09.616  
00:28:09.616  real	0m9.472s
00:28:09.616  user	0m33.680s
00:28:09.616  sys	0m1.954s
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:09.616  ************************************
00:28:09.616  END TEST nvmf_shutdown_tc3
00:28:09.616  ************************************
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]]
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:09.616   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:09.875  ************************************
00:28:09.875  START TEST nvmf_shutdown_tc4
00:28:09.875  ************************************
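For orientation, run_test is the wrapper producing the START/END banners and the real/user/sys block seen around each test case; a minimal paraphrase (banner text matches the log, internals are assumed):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                  # emits the real/user/sys block at test end
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}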
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:09.875    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:09.875   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:28:09.876  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:28:09.876  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
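The array bookkeeping traced above classifies NICs by PCI vendor:device ID and then, because this rig is flagged mlx5, keeps only the Mellanox devices. A condensed sketch, assuming pci_bus_cache maps "vendor:device" keys to whitespace-separated PCI addresses (populated earlier by the harness; the SPDK_TEST_NVMF_NICS variable name is an assumption):

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})      # word-splitting intended: cache values
e810+=(${pci_bus_cache["$intel:0x159b"]})      # are whitespace-separated PCI addresses
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1015"]})    # 0x1015 is what both ports report above
pci_devs+=("${e810[@]}")
if [[ $TEST_TRANSPORT == rdma ]]; then         # rdma in this run
    pci_devs+=("${x722[@]}" "${mlx[@]}")
fi
[[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")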
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:28:09.876  Found net devices under 0000:d9:00.0: mlx_0_0
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:28:09.876  Found net devices under 0000:d9:00.1: mlx_0_1
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
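Per the modprobe sequence just traced, load_ib_rdma_modules is no more than loading the InfiniBand/RDMA core stack in dependency order; equivalently:

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done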
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:28:09.876   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:09.876     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:09.876     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:09.876    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
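The nested loop just traced is get_rdma_if_list: it prints each detected net device that rxe_cfg also reports, skipping to the next candidate on the first match. Reconstructed from the trace (net_devs is the global filled in during PCI discovery above):

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2        # matched: move on to the next net_dev
            fi
        done
    done
}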
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:28:09.877  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:09.877      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:28:09.877      altname enp217s0f0np0
00:28:09.877      altname ens818f0np0
00:28:09.877      inet 192.168.100.8/24 scope global mlx_0_0
00:28:09.877         valid_lft forever preferred_lft forever
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:28:09.877  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:09.877      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:28:09.877      altname enp217s0f1np1
00:28:09.877      altname ens818f1np1
00:28:09.877      inet 192.168.100.9/24 scope global mlx_0_1
00:28:09.877         valid_lft forever preferred_lft forever
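get_ip_address, run once per interface above, is exactly the three-stage pipeline in the trace: field 4 of 'ip -o -4 addr show' is the CIDR address, and cut strips the prefix length:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# On this node: get_ip_address mlx_0_0 -> 192.168.100.8
#               get_ip_address mlx_0_1 -> 192.168.100.9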
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:09.877      13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:09.877      13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:09.877     13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:28:09.877  192.168.100.9'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:28:09.877  192.168.100.9'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:28:09.877  192.168.100.9'
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2
00:28:09.877    13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
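The head/tail juggling above reduces to: the first line of the RDMA IP list becomes the first target address, the second line the second (this run has exactly two):

RDMA_IP_LIST=$(get_available_rdma_ips)                          # newline-separated
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)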
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:28:09.877   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3431807
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3431807
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3431807 ']'
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:10.136  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:10.136   13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:28:10.136  [2024-12-14 13:54:09.719199] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:10.136  [2024-12-14 13:54:09.719296] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:10.136  [2024-12-14 13:54:09.853631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:10.394  [2024-12-14 13:54:09.952119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:10.394  [2024-12-14 13:54:09.952166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:10.394  [2024-12-14 13:54:09.952178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:10.394  [2024-12-14 13:54:09.952191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:10.394  [2024-12-14 13:54:09.952200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:10.394  [2024-12-14 13:54:09.954662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:10.394  [2024-12-14 13:54:09.954734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:10.394  [2024-12-14 13:54:09.954862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:10.394  [2024-12-14 13:54:09.954889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.961   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:10.961  [2024-12-14 13:54:10.611364] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000286c0/0x7fb6aabbd940) succeed.
00:28:10.961  [2024-12-14 13:54:10.621042] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028840/0x7fb6aab79940) succeed.
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.219   13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:11.477  Malloc1
00:28:11.477  [2024-12-14 13:54:11.040058] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:28:11.477  Malloc2
00:28:11.477  Malloc3
00:28:11.735  Malloc4
00:28:11.735  Malloc5
00:28:11.735  Malloc6
00:28:11.993  Malloc7
00:28:11.993  Malloc8
00:28:12.251  Malloc9
00:28:12.251  Malloc10
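The cat loop at shutdown.sh lines 28-29 and the single rpc_cmd at line 36 build and replay a batched RPC script: ten Malloc bdevs, ten subsystems, and listeners on the RDMA target. A hypothetical reconstruction; the RPC names and arguments follow the usual SPDK conventions, since the actual file contents are not in this log:

rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    {
        echo "bdev_malloc_create -b Malloc$i $MALLOC_SIZE $MALLOC_BLOCK_SIZE"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420"
    } >> "$testdir/rpcs.txt"
done
rpc_cmd < "$testdir/rpcs.txt"   # one shot; the Malloc1..Malloc10 lines above are its output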
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3432131
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:28:12.251   13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:28:12.509  [2024-12-14 13:54:12.042780] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
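The workload generator started at shutdown.sh line 148, with its flags spelled out. The glosses for -q/-o/-w/-t/-r follow spdk_nvme_perf usage; the -O and -P glosses are assumptions, so check 'spdk_nvme_perf --help' before relying on them:

perf_args=(
    -q 128                       # queue depth
    -o 45056                     # I/O size in bytes (44 KiB)
    -O 4096                      # assumed: I/O unit size
    -w randwrite                 # workload pattern
    -t 20                        # run time, seconds
    -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
    -P 4                         # assumed: parallelism knob
)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}" &
perfpid=$!                       # 3432131 in this run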
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3431807
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3431807 ']'
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3431807
00:28:17.774    13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:17.774    13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431807
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431807'
00:28:17.774  killing process with pid 3431807
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3431807
00:28:17.774   13:54:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3431807
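killprocess, whose branches the trace walks (kill -0 liveness probe, 'ps -o comm=' to avoid signalling a bare sudo wrapper, then kill and wait), paraphrased from the observed logic rather than the literal source:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if kill -0 "$pid"; then
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here
            [[ $process_name == sudo ]] && return 1           # never signal the sudo shim
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    else
        echo "Process with pid $pid is not found"             # the tc3 branch seen earlier
    fi
}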
00:28:17.774  NVMe io qpair process completion error
00:28:17.774  (message repeated 9 more times)
00:28:18.712  Write completed with error (sct=0, sc=8)
00:28:18.712  starting I/O failed: -6
00:28:18.712  Write completed with error (sct=0, sc=8)   [identical line repeated 127 times, timestamps 00:28:18.712-00:28:18.713]
00:28:18.713  [2024-12-14 13:54:18.154078] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
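The Keep Alive errors that punctuate the write failures come from the host-side controller of each subsystem; eight appear in this section (cnode2 through cnode7, cnode9 and cnode10). With the target process gone, the periodic Keep Alive command that NVMe-oF controllers require can no longer be submitted, so each controller appears to log one failure as it tears down. A hypothetical one-liner to list the affected NQNs in failure order from a saved log:

    grep 'Submitting Keep Alive failed' console.log | grep -o 'nqn[^,]*'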
00:28:18.713  Write completed with error (sct=0, sc=8)   [identical line repeated 129 times]
00:28:18.713  starting I/O failed: -6
00:28:18.713  [2024-12-14 13:54:18.180095] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:28:18.713  Write completed with error (sct=0, sc=8)
00:28:18.713  starting I/O failed: -6
00:28:18.713  Write completed with error (sct=0, sc=8)
00:28:18.713  starting I/O failed: -6
00:28:18.713  Write completed with error (sct=0, sc=8)   [identical line repeated 125 times, timestamps 00:28:18.713-00:28:18.714]
00:28:18.714  [2024-12-14 13:54:18.207834] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Submitting Keep Alive failed
00:28:18.714  Write completed with error (sct=0, sc=8)   [identical line repeated 128 times, timestamps 00:28:18.714-00:28:18.715]
00:28:18.715  [2024-12-14 13:54:18.233883] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:28:18.715  Write completed with error (sct=0, sc=8)   [identical line repeated 129 times]
00:28:18.715  starting I/O failed: -6
00:28:18.715  [2024-12-14 13:54:18.256390] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed
00:28:18.715  Write completed with error (sct=0, sc=8)   [identical line repeated 128 times, timestamps 00:28:18.715-00:28:18.716]
00:28:18.716  starting I/O failed: -6
00:28:18.716  [2024-12-14 13:54:18.282066] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed
00:28:18.716  Write completed with error (sct=0, sc=8)   [identical line repeated 127 times]
00:28:18.717  [2024-12-14 13:54:18.310242] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.717  (last message repeated 78 more times)
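An editorial aside on the repeated status above: per the NVMe base specification, sct=0 is the generic command status type and, within it, sc=0x08 is "Command Aborted due to SQ Deletion", which is exactly what in-flight writes report while a shutdown test tears down the target's submission queues. A minimal bash helper for decoding the handful of (sct, sc) pairs that show up in logs like this one (the table is an abbreviated transcription from the spec, not an SPDK utility):

    decode_nvme_status() {
        # Decode (sct, sc) pairs for the generic command status type only.
        local sct=$1 sc=$2
        if (( sct != 0 )); then
            echo "sct=$sct: non-generic status type, see the NVMe base spec"
            return
        fi
        case $sc in
            0) echo "Successful Completion" ;;
            4) echo "Data Transfer Error" ;;
            6) echo "Internal Error" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "sc=$sc: see the generic command status values in the spec" ;;
        esac
    }

    decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion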
00:28:18.717  starting I/O failed: -6
00:28:18.717  [2024-12-14 13:54:18.335745] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:28:18.717  Write completed with error (sct=0, sc=8)
00:28:18.718  (last message repeated 126 more times)
00:28:18.718  [2024-12-14 13:54:18.361307] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed
00:28:18.718  Write completed with error (sct=0, sc=8)
00:28:18.719  (last message repeated 127 more times)
00:28:18.719  [2024-12-14 13:54:18.389458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.389543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:18.719  Initializing NVMe Controllers
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:28:18.719  Controller IO queue size 128, less than required.
00:28:18.719  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:18.719  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:18.719  Initialization complete. Launching workers.
00:28:18.719  ========================================================
00:28:18.719  Device Information                                                              :       IOPS      MiB/s    Avg(us)    Min(us)    Max(us)
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1  from core  0:    1417.37      60.90   90344.28     126.86 1250066.70
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1  from core  0:    1426.52      61.30   89975.20     125.61 1264264.44
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1  from core  0:    1474.84      63.37   87282.96     125.08 1098613.46
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core  0:    1412.45      60.69   91424.19     125.96 1342904.27
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1  from core  0:    1430.76      61.48   90502.34     122.38 1317348.03
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1  from core  0:    1435.68      61.69   90398.03     124.98 1328297.27
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1  from core  0:    1420.08      61.02   91653.66     125.41 1363587.05
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1  from core  0:    1403.47      60.31   93018.12     125.94 1423900.51
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1  from core  0:    1439.92      61.87   90901.73     122.18 1375306.34
00:28:18.719  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1  from core  0:    1391.77      59.80   94287.36     126.58 1496847.48
00:28:18.719  ========================================================
00:28:18.719  Total                                                                           :   14252.86     612.43   90953.97     122.18 1496847.48
00:28:18.719  
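The Total row follows from the per-device rows: IOPS and MiB/s are straight column sums, min and max are the column extremes (122.18 us and 1496847.48 us both appear above), and the 90953.97 us average is consistent with an IOPS-weighted mean of the per-device averages. A small awk sketch that re-derives the totals from the rows as printed (fields are counted from the end of each line, so the timestamp prefix does not matter; perf_output.log is a hypothetical capture of this output):

    awk '/NSID 1 *from core/ {
        n = NF                        # last five fields: IOPS MiB/s avg min max
        iops += $(n-4); mibs += $(n-3)
        wavg += $(n-4) * $(n-2)       # accumulate IOPS-weighted average latency
        if (min == 0 || $(n-1) < min) min = $(n-1)
        if ($n > max) max = $n
    }
    END { printf "Total: %.2f IOPS, %.2f MiB/s, avg %.2f us, min %.2f, max %.2f\n",
          iops, mibs, wavg / iops, min, max }' perf_output.log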
00:28:18.719  [2024-12-14 13:54:18.412983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.413018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.414859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.414878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.417398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.417423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.419478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.419505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.421445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.421469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.423299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.423325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.425202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.425229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:18.719  [2024-12-14 13:54:18.426940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.719  [2024-12-14 13:54:18.426963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:18.976  [2024-12-14 13:54:18.457780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:18.976  [2024-12-14 13:54:18.457808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:18.976  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
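For context, the table and errors above come from SPDK's perf example binary being pointed at the RDMA listeners while the target is shut down underneath it. A hedged sketch of an equivalent standalone invocation; the flags shown (-q queue depth, -o I/O size in bytes, -w workload, -t seconds, -r transport ID) are the commonly documented ones, but option spellings can vary between SPDK releases, so treat this as an assumption rather than the harness's exact command line:

    ./build/bin/spdk_nvme_perf \
        -q 128 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'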
00:28:20.878   13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3432131
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3432131
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:22.257    13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3432131
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
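The NOT wait 3432131 trace above is the harness asserting that a command fails: the wrapped command runs, its exit status is captured in es, and the final (( !es == 0 )) only succeeds when es is non-zero. A condensed sketch of the same pattern (the real helper also validates the argument with type -t, as traced above):

    # Succeed if and only if the wrapped command exits non-zero.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT wait 3432131 && echo "pid 3432131 already exited, as expected"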
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:22.257  rmmod nvme_rdma
00:28:22.257  rmmod nvme_fabrics
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3431807 ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3431807
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3431807 ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3431807
00:28:22.257  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3431807) - No such process
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3431807 is not found'
00:28:22.257  Process with pid 3431807 is not found
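killprocess above probes the pid with kill -0 before killing it; pid 3431807 has already exited, so the probe fails and the helper just reports that. A minimal version of the same guard (the real helper in autotest_common.sh does more bookkeeping):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null   # reap it when it is our own child
        else
            echo "Process with pid $pid is not found"
        fi
    }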
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:22.257  
00:28:22.257  real	0m12.323s
00:28:22.257  user	0m46.357s
00:28:22.257  sys	0m1.603s
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:22.257  ************************************
00:28:22.257  END TEST nvmf_shutdown_tc4
00:28:22.257  ************************************
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:28:22.257  
00:28:22.257  real	0m52.196s
00:28:22.257  user	2m53.216s
00:28:22.257  sys	0m12.673s
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:22.257  ************************************
00:28:22.257  END TEST nvmf_shutdown
00:28:22.257  ************************************
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:22.257  ************************************
00:28:22.257  START TEST nvmf_nsid
00:28:22.257  ************************************
00:28:22.257   13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:28:22.257  * Looking for test storage...
00:28:22.257  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:28:22.257    13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:22.257     13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:28:22.257     13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
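The long trace above is lt 1.15 2 from scripts/common.sh: both version strings are split on '.', '-' and ':', then compared field by field as integers, so 1.15 becomes (1, 15), 2 becomes (2), and 1 < 2 at the first index makes the comparison return 0 (true), selecting the newer lcov option set below. A condensed re-implementation of the same comparison, assuming purely numeric fields (the real cmp_versions also normalizes each field through its decimal helper):

    # Return 0 when version $1 is strictly less than version $2.
    version_lt() {
        local IFS=.-: v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "1.15 < 2"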
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:22.517  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:22.517  		--rc genhtml_branch_coverage=1
00:28:22.517  		--rc genhtml_function_coverage=1
00:28:22.517  		--rc genhtml_legend=1
00:28:22.517  		--rc geninfo_all_blocks=1
00:28:22.517  		--rc geninfo_unexecuted_blocks=1
00:28:22.517  		
00:28:22.517  		'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:22.517  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:22.517  		--rc genhtml_branch_coverage=1
00:28:22.517  		--rc genhtml_function_coverage=1
00:28:22.517  		--rc genhtml_legend=1
00:28:22.517  		--rc geninfo_all_blocks=1
00:28:22.517  		--rc geninfo_unexecuted_blocks=1
00:28:22.517  		
00:28:22.517  		'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:22.517  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:22.517  		--rc genhtml_branch_coverage=1
00:28:22.517  		--rc genhtml_function_coverage=1
00:28:22.517  		--rc genhtml_legend=1
00:28:22.517  		--rc geninfo_all_blocks=1
00:28:22.517  		--rc geninfo_unexecuted_blocks=1
00:28:22.517  		
00:28:22.517  		'
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:22.517  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:22.517  		--rc genhtml_branch_coverage=1
00:28:22.517  		--rc genhtml_function_coverage=1
00:28:22.517  		--rc genhtml_legend=1
00:28:22.517  		--rc geninfo_all_blocks=1
00:28:22.517  		--rc geninfo_unexecuted_blocks=1
00:28:22.517  		
00:28:22.517  		'
00:28:22.517   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:28:22.517     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:28:22.517    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:22.518     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:22.518     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:28:22.518     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:22.518     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:22.518     13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:22.518      13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:22.518      13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:22.518      13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:22.518      13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:28:22.518      13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:22.518  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
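The "[: : integer expression expected" message above is harmless but worth decoding: the '[' '' -eq 1 ']' test a few lines earlier runs a numeric comparison on a feature flag that expands to the empty string here, and the [ builtin cannot treat an empty string as an integer, so the test errors out and simply evaluates false. A minimal reproduction with the usual guards:

    var=""
    [ "$var" -eq 1 ]                    # -> bash: [: : integer expression expected
    [ -n "$var" ] && [ "$var" -eq 1 ]   # guard: only compare when the flag is set
    (( var == 1 ))                      # alternative: (( )) treats empty as 0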
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:22.518    13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:28:22.518   13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=()
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:28:29.193  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:29.193   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:28:29.194  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:28:29.194  Found net devices under 0000:d9:00.0: mlx_0_0
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:28:29.194  Found net devices under 0000:d9:00.1: mlx_0_1
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2
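[annotation] get_rdma_if_list, traced here and again further down, intersects the discovered net_devs with the RDMA-capable interfaces that rxe_cfg reports, printing only names present in both lists. A sketch reconstructed from the trace (the empty-list early-out is assumed):

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        # rxe_cfg wraps scripts/rxe_cfg_small.sh and lists RDMA-capable netdevs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        (( ${#rxe_net_devs[@]} == 0 )) && return 0    # assumed early-out
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2    # next net_dev once matched, as in the trace
                fi
            done
        done
    }

Both mlx_0_0 and mlx_0_1 match, so the caller below assigns a 192.168.100.x address check to each in turn.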
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:28:29.194  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:29.194      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:28:29.194      altname enp217s0f0np0
00:28:29.194      altname ens818f0np0
00:28:29.194      inet 192.168.100.8/24 scope global mlx_0_0
00:28:29.194         valid_lft forever preferred_lft forever
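[annotation] get_ip_address, traced above for mlx_0_0, extracts the interface's first IPv4 address: `ip -o -4` prints one record per line with the CIDR address in column 4, which awk selects and cut trims. A sketch:

    get_ip_address() {
        local interface=$1
        # "6: mlx_0_0 inet 192.168.100.8/24 ..." -> "192.168.100.8/24" -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node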
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:28:29.194  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:29.194      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:28:29.194      altname enp217s0f1np1
00:28:29.194      altname ens818f1np1
00:28:29.194      inet 192.168.100.9/24 scope global mlx_0_1
00:28:29.194         valid_lft forever preferred_lft forever
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:28:29.194   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:29.194      13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:29.194      13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:29.194     13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2
00:28:29.194    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:28:29.195  192.168.100.9'
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:28:29.195  192.168.100.9'
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:28:29.195  192.168.100.9'
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2
00:28:29.195    13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
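[annotation] The first and second target IPs are then sliced out of the newline-separated RDMA_IP_LIST with head and tail, exactly as traced (in the real run the list comes from get_available_rdma_ips):

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9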
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3437154
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3437154
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3437154 ']'
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:29.195  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:29.195   13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:29.195  [2024-12-14 13:54:28.839373] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:29.195  [2024-12-14 13:54:28.839473] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:29.454  [2024-12-14 13:54:28.979403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.454  [2024-12-14 13:54:29.082554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:29.454  [2024-12-14 13:54:29.082597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:29.454  [2024-12-14 13:54:29.082610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:29.454  [2024-12-14 13:54:29.082639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:29.454  [2024-12-14 13:54:29.082649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:29.454  [2024-12-14 13:54:29.084056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
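[annotation] waitforlisten (autotest_common.sh) blocks until the freshly launched nvmf_tgt answers on its RPC socket; this trace only shows the first-iteration success, so the retry shape below is a hedged sketch, not the verbatim helper:

    # Assumption: simplified liveness/retry loop; the real helper checks the
    # RPC endpoint itself rather than just testing for the socket file.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            [[ -S $rpc_addr ]] && return 0            # socket is up: done
            sleep 0.5
        done
        return 1
    }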
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3437194
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=24a47a77-92c6-42c6-9b29-899156582162
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0df2dd8e-a7a5-4201-a60d-cb06b56ef0a7
00:28:30.023    13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=43b2ca0e-862b-42f8-a37d-7fe91230bda8
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.023   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:30.023  null0
00:28:30.023  null1
00:28:30.023  null2
00:28:30.023  [2024-12-14 13:54:29.748868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:30.023  [2024-12-14 13:54:29.748960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437194 ]
00:28:30.023  [2024-12-14 13:54:29.753545] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f8bbf5bd940) succeed.
00:28:30.283  [2024-12-14 13:54:29.762814] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f8bbf579940) succeed.
00:28:30.283  [2024-12-14 13:54:29.866946] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:28:30.283  [2024-12-14 13:54:29.883480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3437194 /var/tmp/tgt2.sock
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3437194 ']'
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...'
00:28:30.283  Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:30.283   13:54:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:30.283  [2024-12-14 13:54:29.989254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:31.220   13:54:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:31.220   13:54:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:28:31.220   13:54:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:28:31.479  [2024-12-14 13:54:31.101410] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028fc0/0x7fd45f78b940) succeed.
00:28:31.479  [2024-12-14 13:54:31.112752] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029140/0x7fd45f747940) succeed.
00:28:31.479  [2024-12-14 13:54:31.190481] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:28:31.738  nvme0n1 nvme0n2
00:28:31.738  nvme1n1
00:28:31.738    13:54:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:28:31.738    13:54:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:28:31.738    13:54:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
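[annotation] nvme_connect (target/nsid.sh) attaches the host over RDMA and then scans /sys/class/nvme to find which controller landed on the requested subsystem NQN, echoing its name for the caller. A sketch with the addressing from this run (the return-1 fallback is assumed, not visible in the trace):

    nvme_connect() {
        local ctrlr
        nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        for ctrlr in /sys/class/nvme/nvme*; do
            [[ -e $ctrlr/subsysnqn ]] || continue
            if [[ $(<"$ctrlr/subsysnqn") == nqn.2024-10.io.spdk:cnode2 ]]; then
                echo "${ctrlr##*/}"    # nvme0 in this run
                return 0
            fi
        done
        return 1
    }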
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
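[annotation] waitforblk polls `lsblk -l -o NAME` until the namespace's block device shows up; the trace shows the probe before and after the (unneeded here) retry loop. A hedged sketch:

    waitforblk() {
        local name=$1 i=0
        while ! lsblk -l -o NAME | grep -q -w "$name"; do
            (( i++ > 100 )) && return 1    # assumed bound; this trace never retries
            sleep 0.1
        done
        lsblk -l -o NAME | grep -q -w "$name"    # final confirmation, as traced
    }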
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 24a47a77-92c6-42c6-9b29-899156582162
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:28:39.857     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:28:39.857     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=24a47a7792c642c69b29899156582162
00:28:39.857    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 24A47A7792C642C69B29899156582162
00:28:39.857   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 24A47A7792C642C69B29899156582162 == \2\4\A\4\7\A\7\7\9\2\C\6\4\2\C\6\9\B\2\9\8\9\9\1\5\6\5\8\2\1\6\2 ]]
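[annotation] Each namespace's NGUID must equal its creation UUID with the dashes stripped. uuid2nguid is essentially `tr -d -` (plus, given the upper-case output above, an assumed case fold), and nvme_get_nguid reads the NGUID from nvme-cli's JSON output:

    uuid2nguid() {
        echo "${1^^}" | tr -d -    # 24a47a77-92c6-... -> 24A47A7792C6... (case fold assumed)
    }
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2 nguid
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }
    [[ $(uuid2nguid "$ns1uuid") == $(nvme_get_nguid nvme0 1) ]]    # the check that passes above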
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0df2dd8e-a7a5-4201-a60d-cb06b56ef0a7
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:28:39.858     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:28:39.858     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0df2dd8ea7a54201a60dcb06b56ef0a7
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0DF2DD8EA7A54201A60DCB06B56EF0A7
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0DF2DD8EA7A54201A60DCB06B56EF0A7 == \0\D\F\2\D\D\8\E\A\7\A\5\4\2\0\1\A\6\0\D\C\B\0\6\B\5\6\E\F\0\A\7 ]]
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 43b2ca0e-862b-42f8-a37d-7fe91230bda8
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid
00:28:39.858     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json
00:28:39.858     13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=43b2ca0e862b42f8a37d7fe91230bda8
00:28:39.858    13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 43B2CA0E862B42F8A37D7FE91230BDA8
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 43B2CA0E862B42F8A37D7FE91230BDA8 == \4\3\B\2\C\A\0\E\8\6\2\B\4\2\F\8\A\3\7\D\7\F\E\9\1\2\3\0\B\D\A\8 ]]
00:28:39.858   13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:28:46.426   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:28:46.426   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:28:46.426   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3437194
00:28:46.426   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3437194 ']'
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3437194
00:28:46.427    13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:46.427    13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3437194
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3437194'
00:28:46.427  killing process with pid 3437194
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3437194
00:28:46.427   13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3437194
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:48.332  rmmod nvme_rdma
00:28:48.332  rmmod nvme_fabrics
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3437154 ']'
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3437154
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3437154 ']'
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3437154
00:28:48.332    13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:48.332    13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3437154
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3437154'
00:28:48.332  killing process with pid 3437154
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3437154
00:28:48.332   13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3437154
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:49.712  
00:28:49.712  real	0m27.267s
00:28:49.712  user	0m39.823s
00:28:49.712  sys	0m6.660s
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:28:49.712  ************************************
00:28:49.712  END TEST nvmf_nsid
00:28:49.712  ************************************
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:49.712  
00:28:49.712  real	16m59.565s
00:28:49.712  user	51m35.453s
00:28:49.712  sys	3m21.409s
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:49.712   13:54:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:49.712  ************************************
00:28:49.712  END TEST nvmf_target_extra
00:28:49.712  ************************************
00:28:49.712   13:54:49 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:28:49.712   13:54:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:49.712   13:54:49 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:49.712   13:54:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:49.712  ************************************
00:28:49.712  START TEST nvmf_host
00:28:49.712  ************************************
00:28:49.712   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:28:49.712  * Looking for test storage...
00:28:49.712  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0
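[annotation] The coverage guard above runs `lt 1.15 2` against the installed lcov: cmp_versions splits both versions on `.`, `-` and `:` and compares component-wise, padding the shorter one with zeros. A hedged sketch (the real scripts/common.sh also validates each component as a decimal):

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return    # 1.15 vs 2 decides here: 1 < 2
            fi
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]    # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }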
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:49.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.712  		--rc genhtml_branch_coverage=1
00:28:49.712  		--rc genhtml_function_coverage=1
00:28:49.712  		--rc genhtml_legend=1
00:28:49.712  		--rc geninfo_all_blocks=1
00:28:49.712  		--rc geninfo_unexecuted_blocks=1
00:28:49.712  		
00:28:49.712  		'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:49.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.712  		--rc genhtml_branch_coverage=1
00:28:49.712  		--rc genhtml_function_coverage=1
00:28:49.712  		--rc genhtml_legend=1
00:28:49.712  		--rc geninfo_all_blocks=1
00:28:49.712  		--rc geninfo_unexecuted_blocks=1
00:28:49.712  		
00:28:49.712  		'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:49.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.712  		--rc genhtml_branch_coverage=1
00:28:49.712  		--rc genhtml_function_coverage=1
00:28:49.712  		--rc genhtml_legend=1
00:28:49.712  		--rc geninfo_all_blocks=1
00:28:49.712  		--rc geninfo_unexecuted_blocks=1
00:28:49.712  		
00:28:49.712  		'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:49.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.712  		--rc genhtml_branch_coverage=1
00:28:49.712  		--rc genhtml_function_coverage=1
00:28:49.712  		--rc genhtml_legend=1
00:28:49.712  		--rc geninfo_all_blocks=1
00:28:49.712  		--rc geninfo_unexecuted_blocks=1
00:28:49.712  		
00:28:49.712  		'
00:28:49.712   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:49.712    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:49.712     13:54:49 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:49.713      13:54:49 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.713      13:54:49 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.713      13:54:49 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.713      13:54:49 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH
00:28:49.713      13:54:49 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:49.713  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:49.713    13:54:49 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
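[annotation] The "integer expression expected" complaint above comes from build_nvmf_app_args testing an unset flag numerically: `[ '' -eq 1 ]` is an error for test(1), which the script tolerates because the test then simply evaluates false. A defensive form that keeps the log clean (SOME_FLAG is an illustrative name, not the variable nvmf/common.sh actually checks):

    # Hypothetical flag name; default unset/empty to 0 before the numeric test.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"    # stand-in action, for illustration only
    fi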
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:49.713   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.972  ************************************
00:28:49.972  START TEST nvmf_multicontroller
00:28:49.972  ************************************
00:28:49.972   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma
00:28:49.972  * Looking for test storage...
00:28:49.972  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:49.972     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:49.972    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:49.972  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.972  		--rc genhtml_branch_coverage=1
00:28:49.972  		--rc genhtml_function_coverage=1
00:28:49.972  		--rc genhtml_legend=1
00:28:49.972  		--rc geninfo_all_blocks=1
00:28:49.973  		--rc geninfo_unexecuted_blocks=1
00:28:49.973  		
00:28:49.973  		'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:49.973  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.973  		--rc genhtml_branch_coverage=1
00:28:49.973  		--rc genhtml_function_coverage=1
00:28:49.973  		--rc genhtml_legend=1
00:28:49.973  		--rc geninfo_all_blocks=1
00:28:49.973  		--rc geninfo_unexecuted_blocks=1
00:28:49.973  		
00:28:49.973  		'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:49.973  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.973  		--rc genhtml_branch_coverage=1
00:28:49.973  		--rc genhtml_function_coverage=1
00:28:49.973  		--rc genhtml_legend=1
00:28:49.973  		--rc geninfo_all_blocks=1
00:28:49.973  		--rc geninfo_unexecuted_blocks=1
00:28:49.973  		
00:28:49.973  		'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:49.973  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:49.973  		--rc genhtml_branch_coverage=1
00:28:49.973  		--rc genhtml_function_coverage=1
00:28:49.973  		--rc genhtml_legend=1
00:28:49.973  		--rc geninfo_all_blocks=1
00:28:49.973  		--rc geninfo_unexecuted_blocks=1
00:28:49.973  		
00:28:49.973  		'
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:49.973     13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:49.973      13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.973      13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.973      13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.973      13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:28:49.973      13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:49.973  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
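The `[: : integer expression expected` message above is a genuine shell bug in the script under test, not harness noise: line 33 of test/nvmf/common.sh evaluates `[ "$var" -eq 1 ]` while the variable is empty, and `-eq` requires an integer on both sides. Execution continues because the failed test simply takes the false branch. A hedged fix, assuming an empty value should mean "disabled" (the flag name below is illustrative, not taken from the script):

    SOME_FLAG=""                          # empty, as in the trace above
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # ${var:-0} substitutes 0 for empty/unset
        echo "flag enabled"
    else
        echo "flag disabled"              # taken silently, no error printed
    fi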
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:49.973    13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']'
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:28:49.973  Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0
00:28:49.973  
00:28:49.973  real	0m0.219s
00:28:49.973  user	0m0.134s
00:28:49.973  sys	0m0.100s
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:49.973   13:54:49 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:49.973  ************************************
00:28:49.973  END TEST nvmf_multicontroller
00:28:49.973  ************************************
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.232  ************************************
00:28:50.232  START TEST nvmf_aer
00:28:50.232  ************************************
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma
00:28:50.232  * Looking for test storage...
00:28:50.232  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
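The scripts/common.sh trace above (@333 through @368) is a field-wise version comparison: `IFS=.-:` splits each version string on dots, dashes, and colons into an array, `decimal` normalizes each field, and the loop compares components until one side differs; 1 < 2 at the first field, so `lt 1.15 2` succeeds and the newer lcov option set is selected below. The same idea in standalone form (equivalent logic, not the SPDK helper itself; fields are assumed purely numeric):

    version_lt() {                        # succeeds when $1 < $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                          # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 predates 2"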
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:50.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:50.232  		--rc genhtml_branch_coverage=1
00:28:50.232  		--rc genhtml_function_coverage=1
00:28:50.232  		--rc genhtml_legend=1
00:28:50.232  		--rc geninfo_all_blocks=1
00:28:50.232  		--rc geninfo_unexecuted_blocks=1
00:28:50.232  		
00:28:50.232  		'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:50.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:50.232  		--rc genhtml_branch_coverage=1
00:28:50.232  		--rc genhtml_function_coverage=1
00:28:50.232  		--rc genhtml_legend=1
00:28:50.232  		--rc geninfo_all_blocks=1
00:28:50.232  		--rc geninfo_unexecuted_blocks=1
00:28:50.232  		
00:28:50.232  		'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:50.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:50.232  		--rc genhtml_branch_coverage=1
00:28:50.232  		--rc genhtml_function_coverage=1
00:28:50.232  		--rc genhtml_legend=1
00:28:50.232  		--rc geninfo_all_blocks=1
00:28:50.232  		--rc geninfo_unexecuted_blocks=1
00:28:50.232  		
00:28:50.232  		'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:50.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:50.232  		--rc genhtml_branch_coverage=1
00:28:50.232  		--rc genhtml_function_coverage=1
00:28:50.232  		--rc genhtml_legend=1
00:28:50.232  		--rc geninfo_all_blocks=1
00:28:50.232  		--rc geninfo_unexecuted_blocks=1
00:28:50.232  		
00:28:50.232  		'
00:28:50.232   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
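One detail in the defaults above: NVME_HOSTNQN is produced by `nvme gen-hostnqn`, which emits `nqn.2014-08.org.nvmexpress:uuid:<uuid>`, and NVME_HOSTID carries just the UUID portion; the pair is later passed to `nvme connect` via the NVME_HOST array. The derivation, reproduced with a parameter expansion (a plausible sketch; common.sh may extract the ID differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last colon, leaving the UUID
    echo "$NVME_HOSTID"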
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:50.232     13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:50.232      13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.232      13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.232      13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.232      13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:28:50.232      13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:50.232  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:50.232    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:50.491    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:50.491    13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable
00:28:50.491   13:54:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=()
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:57.052   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:28:57.053  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:28:57.053  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:28:57.053  Found net devices under 0000:d9:00.0: mlx_0_0
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:28:57.053  Found net devices under 0000:d9:00.1: mlx_0_1
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm
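`load_ib_rdma_modules` above brings up the kernel InfiniBand/RDMA stack module by module before any RDMA transport work starts. The same sequence, expressed as a loop that names the failing module (a sketch of the idea, not the harness function):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done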
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:57.053     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:57.053     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:28:57.053  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:57.053      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:28:57.053      altname enp217s0f0np0
00:28:57.053      altname ens818f0np0
00:28:57.053      inet 192.168.100.8/24 scope global mlx_0_0
00:28:57.053         valid_lft forever preferred_lft forever
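`get_ip_address` above is a three-stage pipeline: `ip -o -4 addr show <if>` prints one line per IPv4 address, `awk '{print $4}'` selects the CIDR field (192.168.100.8/24), and `cut -d/ -f1` drops the prefix length. Inlined for any interface:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8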
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:28:57.053  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:28:57.053      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:28:57.053      altname enp217s0f1np1
00:28:57.053      altname ens818f1np1
00:28:57.053      inet 192.168.100.9/24 scope global mlx_0_1
00:28:57.053         valid_lft forever preferred_lft forever
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:28:57.053   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:28:57.053    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:28:57.053     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list
00:28:57.053     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:28:57.054     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:28:57.054      13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:28:57.054      13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2
00:28:57.312     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1
00:28:57.313     13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}'
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:28:57.313  192.168.100.9'
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:28:57.313  192.168.100.9'
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:28:57.313  192.168.100.9'
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2
00:28:57.313    13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
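RDMA_IP_LIST is a newline-separated string, and the two target IPs are peeled off by position: `head -n 1` takes the first line, `tail -n +2 | head -n 1` takes the second. Reproduced standalone with this run's values:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9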
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3443989
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3443989
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3443989 ']'
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:57.313  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:57.313   13:54:56 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:57.313  [2024-12-14 13:54:56.959849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:28:57.313  [2024-12-14 13:54:56.959952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:57.571  [2024-12-14 13:54:57.093288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:57.571  [2024-12-14 13:54:57.204311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:57.571  [2024-12-14 13:54:57.204356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:57.571  [2024-12-14 13:54:57.204369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:57.571  [2024-12-14 13:54:57.204382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:57.571  [2024-12-14 13:54:57.204392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:57.571  [2024-12-14 13:54:57.206907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:57.571  [2024-12-14 13:54:57.206989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:57.572  [2024-12-14 13:54:57.207057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.572  [2024-12-14 13:54:57.207065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
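`nvmfappstart -m 0xF` hands nvmf_tgt the core mask 0xF (binary 1111), selecting logical cores 0 through 3, which is why DPDK reports four available cores and four reactors start above. A quick decode of any such mask:

    mask=0xF
    for (( core = 0; core < 8; core++ )); do
        (( mask >> core & 1 )) && echo "core $core selected"   # prints cores 0..3
    done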
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.138   13:54:57 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.138  [2024-12-14 13:54:57.849230] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f5f6cd3e940) succeed.
00:28:58.138  [2024-12-14 13:54:57.858672] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f5f6c3bd940) succeed.
00:28:58.397   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.397   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:28:58.397   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.397   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.655  Malloc0
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.655  [2024-12-14 13:54:58.203485] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:58.655  [
00:28:58.655    {
00:28:58.655      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:58.655      "subtype": "Discovery",
00:28:58.655      "listen_addresses": [],
00:28:58.655      "allow_any_host": true,
00:28:58.655      "hosts": []
00:28:58.655    },
00:28:58.655    {
00:28:58.655      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:58.655      "subtype": "NVMe",
00:28:58.655      "listen_addresses": [
00:28:58.655        {
00:28:58.655          "trtype": "RDMA",
00:28:58.655          "adrfam": "IPv4",
00:28:58.655          "traddr": "192.168.100.8",
00:28:58.655          "trsvcid": "4420"
00:28:58.655        }
00:28:58.655      ],
00:28:58.655      "allow_any_host": true,
00:28:58.655      "hosts": [],
00:28:58.655      "serial_number": "SPDK00000000000001",
00:28:58.655      "model_number": "SPDK bdev Controller",
00:28:58.655      "max_namespaces": 2,
00:28:58.655      "min_cntlid": 1,
00:28:58.655      "max_cntlid": 65519,
00:28:58.655      "namespaces": [
00:28:58.655        {
00:28:58.655          "nsid": 1,
00:28:58.655          "bdev_name": "Malloc0",
00:28:58.655          "name": "Malloc0",
00:28:58.655          "nguid": "36F8797559AE4EC68047A1CFE80CC063",
00:28:58.655          "uuid": "36f87975-59ae-4ec6-8047-a1cfe80cc063"
00:28:58.655        }
00:28:58.655      ]
00:28:58.655    }
00:28:58.655  ]
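The JSON above is the nvmf_get_subsystems RPC result: the always-present discovery subsystem plus cnode1, listening on 192.168.100.8:4420 with Malloc0 as namespace 1 and room for two namespaces (-m 2 at creation). When scripting against this output, jq is the usual tool; for instance, listing nsid/uuid pairs for cnode1 (an illustrative query assuming jq is available; it is not part of this test):

    scripts/rpc.py nvmf_get_subsystems \
        | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
                 | .namespaces[] | "\(.nsid) \(.uuid)"'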
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3444269
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r '        trtype:rdma         adrfam:IPv4         traddr:192.168.100.8         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:28:58.655   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']'
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
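`waitforfile` above is a bounded poll: the aer helper (pid 3444269, launched in the background at @27) touches /tmp/aer_touch_file once its event callbacks are registered, and the harness checks for the file in 0.1 s steps, giving up after 200 iterations (about 20 s); here the file appeared on the fourth check. The pattern in isolation (equivalent logic, not the exact helper):

    waitforfile() {                      # poll for $1, fail after ~20 s
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$(( i + 1 ))
            sleep 0.1
        done
        return 0
    }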
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.914   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.173  Malloc1
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.173  [
00:28:59.173    {
00:28:59.173      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:59.173      "subtype": "Discovery",
00:28:59.173      "listen_addresses": [],
00:28:59.173      "allow_any_host": true,
00:28:59.173      "hosts": []
00:28:59.173    },
00:28:59.173    {
00:28:59.173      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:59.173      "subtype": "NVMe",
00:28:59.173      "listen_addresses": [
00:28:59.173        {
00:28:59.173          "trtype": "RDMA",
00:28:59.173          "adrfam": "IPv4",
00:28:59.173          "traddr": "192.168.100.8",
00:28:59.173          "trsvcid": "4420"
00:28:59.173        }
00:28:59.173      ],
00:28:59.173      "allow_any_host": true,
00:28:59.173      "hosts": [],
00:28:59.173      "serial_number": "SPDK00000000000001",
00:28:59.173      "model_number": "SPDK bdev Controller",
00:28:59.173      "max_namespaces": 2,
00:28:59.173      "min_cntlid": 1,
00:28:59.173      "max_cntlid": 65519,
00:28:59.173      "namespaces": [
00:28:59.173        {
00:28:59.173          "nsid": 1,
00:28:59.173          "bdev_name": "Malloc0",
00:28:59.173          "name": "Malloc0",
00:28:59.173          "nguid": "36F8797559AE4EC68047A1CFE80CC063",
00:28:59.173          "uuid": "36f87975-59ae-4ec6-8047-a1cfe80cc063"
00:28:59.173        },
00:28:59.173        {
00:28:59.173          "nsid": 2,
00:28:59.173          "bdev_name": "Malloc1",
00:28:59.173          "name": "Malloc1",
00:28:59.173          "nguid": "984C631D20704CF4B8964E9834AABDCB",
00:28:59.173          "uuid": "984c631d-2070-4cf4-b896-4e9834aabdcb"
00:28:59.173        }
00:28:59.173      ]
00:28:59.173    }
00:28:59.173  ]
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3444269
00:28:59.173  Asynchronous Event Request test
00:28:59.173  Attaching to 192.168.100.8
00:28:59.173  Attached to 192.168.100.8
00:28:59.173  Registering asynchronous event callbacks...
00:28:59.173  Starting namespace attribute notice tests for all controllers...
00:28:59.173  192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:28:59.173  aer_cb - Changed Namespace
00:28:59.173  Cleaning up...
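The aer output above decodes per the NVMe AEN completion layout (dword 0: bits 2:0 event type, 15:8 event information, 23:16 log page identifier): type 0x02 is a Notice, info 0x00 under Notice is Namespace Attribute Changed, and log page 0x04 is the Changed Namespace List, which is exactly the event provoked by hot-adding Malloc1 as nsid 2 while the test held an outstanding AER. Decoding such a result dword by hand:

    aen=0x040002                          # log page 0x04, info 0x00, type 0x2
    echo "type $(( aen & 0x7 ))"          # bits 2:0   -> 2 (Notice)
    echo "info $(( aen >> 8 & 0xff ))"    # bits 15:8  -> 0 (Namespace Attribute Changed)
    echo "page $(( aen >> 16 & 0xff ))"   # bits 23:16 -> 4 (Changed Namespace List)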
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.173   13:54:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.431   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.431   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:28:59.431   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.431   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:59.690  rmmod nvme_rdma
00:28:59.690  rmmod nvme_fabrics
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
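`nvmfcleanup` above disables errexit, then retries `modprobe -v -r nvme-rdma` for up to 20 passes because the module can stay busy briefly while RDMA connections tear down; here it unloaded on the first pass, and the `rmmod nvme_rdma` / `rmmod nvme_fabrics` lines are modprobe's verbose output (removing nvme-rdma also drops the now-unused nvme_fabrics dependency). The retry shape, sketched minimally (the real function may pace the retries differently):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1                    # assumed back-off; gives connections time to drain
    done
    modprobe -v -r nvme-fabrics
    set -e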
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3443989 ']'
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3443989
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3443989 ']'
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3443989
00:28:59.690    13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:59.690    13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3443989
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3443989'
00:28:59.690  killing process with pid 3443989
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3443989
00:28:59.690   13:54:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3443989
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:29:01.594  
00:29:01.594  real	0m11.199s
00:29:01.594  user	0m15.114s
00:29:01.594  sys	0m6.145s
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:29:01.594  ************************************
00:29:01.594  END TEST nvmf_aer
00:29:01.594  ************************************
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:01.594   13:55:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.594  ************************************
00:29:01.594  START TEST nvmf_async_init
00:29:01.594  ************************************
00:29:01.594   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma
00:29:01.594  * Looking for test storage...
00:29:01.594  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:01.594     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:01.594  		--rc genhtml_branch_coverage=1
00:29:01.594  		--rc genhtml_function_coverage=1
00:29:01.594  		--rc genhtml_legend=1
00:29:01.594  		--rc geninfo_all_blocks=1
00:29:01.594  		--rc geninfo_unexecuted_blocks=1
00:29:01.594  		
00:29:01.594  		'
00:29:01.594    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:01.594  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:01.594  		--rc genhtml_branch_coverage=1
00:29:01.594  		--rc genhtml_function_coverage=1
00:29:01.594  		--rc genhtml_legend=1
00:29:01.594  		--rc geninfo_all_blocks=1
00:29:01.595  		--rc geninfo_unexecuted_blocks=1
00:29:01.595  		
00:29:01.595  		'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:01.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:01.595  		--rc genhtml_branch_coverage=1
00:29:01.595  		--rc genhtml_function_coverage=1
00:29:01.595  		--rc genhtml_legend=1
00:29:01.595  		--rc geninfo_all_blocks=1
00:29:01.595  		--rc geninfo_unexecuted_blocks=1
00:29:01.595  		
00:29:01.595  		'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:01.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:01.595  		--rc genhtml_branch_coverage=1
00:29:01.595  		--rc genhtml_function_coverage=1
00:29:01.595  		--rc genhtml_legend=1
00:29:01.595  		--rc geninfo_all_blocks=1
00:29:01.595  		--rc geninfo_unexecuted_blocks=1
00:29:01.595  		
00:29:01.595  		'
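The trace above walks `lt 1.15 2` through cmp_versions: both version strings are split on `.-:` into arrays and compared field by field until one side wins. A condensed, self-contained sketch of that logic (an illustration of the traced behavior, not the script's verbatim source):

    # return 0 (true) when $1 is an older version than $2, as in `lt 1.15 2`
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: less-than
        done
        return 1                                              # equal: not less-than
    }
    lt 1.15 2 && echo "lcov older than 2"

The run detects lcov < 2 and accordingly keeps the 1.x-style `--rc lcov_*` option names in LCOV_OPTS above.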
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:01.595     13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:01.595      13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:01.595      13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:01.595      13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:01.595      13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH
00:29:01.595      13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
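The PATH echoed above carries the same three toolchain directories six times over, which appears to come from paths/export.sh prepending them unconditionally each time it is sourced across the nested test scripts. That is harmless for lookup (first match wins), but a guarded prepend keeps the variable readable; a minimal sketch, not taken from the script:

    # prepend a directory to PATH only if it is not already present
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH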
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:01.595  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0
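The "integer expression expected" message above is a real (non-fatal) bug surfaced by the trace: nvmf/common.sh line 33 tests an empty string with `-eq 1`, so one of the flags consulted by build_nvmf_app_args is unset in this environment; the test simply fails and the run continues. A defensive sketch of the pattern, using a hypothetical flag name since the trace does not show which variable expanded empty:

    # guard numeric flag tests against unset/empty values
    : "${SOME_NVMF_FLAG:=0}"          # hypothetical name; default empty/unset to 0
    if [ "$SOME_NVMF_FLAG" -eq 1 ]; then
        NVMF_APP+=(--some-option)     # illustrative option, not an actual nvmf_tgt flag
    fi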
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d -
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=24bdd7db6bf14e29b43f5389f3ac4dbf
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:01.595    13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable
00:29:01.595   13:55:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=()
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:29:08.155  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:29:08.155  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:29:08.155  Found net devices under 0000:d9:00.0: mlx_0_0
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:29:08.155  Found net devices under 0000:d9:00.1: mlx_0_1
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 ))
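gather_supported_nvmf_pci_devs above first builds allow-lists of Intel E810/X722 and Mellanox device IDs, keeps only the mlx entries (this rig reports mlx5), and then resolves each surviving PCI function to its kernel net device through sysfs, exactly the `/sys/bus/pci/devices/$pci/net/*` glob traced at nvmf/common.sh@411. That lookup reduces to:

    # resolve a PCI function to its net device name(s), as nvmf/common.sh@411 does
    pci=0000:d9:00.0                               # first port found in this run
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "${netdir##*/}"   # prints mlx_0_0 here
    done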
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:08.155     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:08.155     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:08.155    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:29:08.155   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:29:08.156  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:08.156      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:29:08.156      altname enp217s0f0np0
00:29:08.156      altname ens818f0np0
00:29:08.156      inet 192.168.100.8/24 scope global mlx_0_0
00:29:08.156         valid_lft forever preferred_lft forever
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:29:08.156  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:08.156      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:29:08.156      altname enp217s0f1np1
00:29:08.156      altname ens818f1np1
00:29:08.156      inet 192.168.100.9/24 scope global mlx_0_1
00:29:08.156         valid_lft forever preferred_lft forever
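Both ports already carry addresses from the 192.168.100.0/24 test prefix (NVMF_IP_PREFIX with least address 8), so allocate_nic_ips has nothing to assign and only verifies them. The get_ip_address helper traced at nvmf/common.sh@117 is a one-line pipeline:

    # primary IPv4 address of an interface, as get_ip_address extracts it
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8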
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:08.156      13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:08.156      13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:08.156     13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:29:08.156  192.168.100.9'
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:29:08.156  192.168.100.9'
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:29:08.156  192.168.100.9'
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2
00:29:08.156    13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma
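At this point the transport layer is fully prepared: the IB/RDMA core modules are loaded, both ports answer on 192.168.100.8/9, and NVMF_TRANSPORT_OPTS is pinned to `-t rdma --num-shared-buffers 1024`. The module set, in the order the trace loads it (load_ib_rdma_modules plus the host-side driver):

    # kernel modules behind an NVMe/RDMA test run
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$m"
    done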
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:08.156   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3447965
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3447965
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3447965 ']'
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:08.414  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:08.414   13:55:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:08.414  [2024-12-14 13:55:07.981498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:08.414  [2024-12-14 13:55:07.981591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:08.414  [2024-12-14 13:55:08.113829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.672  [2024-12-14 13:55:08.210038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:08.672  [2024-12-14 13:55:08.210086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:08.672  [2024-12-14 13:55:08.210098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:08.672  [2024-12-14 13:55:08.210110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:08.672  [2024-12-14 13:55:08.210121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:08.672  [2024-12-14 13:55:08.211380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239  [2024-12-14 13:55:08.854729] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f69b5192940) succeed.
00:29:09.239  [2024-12-14 13:55:08.864037] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f69b514e940) succeed.
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239  null0
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24bdd7db6bf14e29b43f5389f3ac4dbf
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.239   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.497  [2024-12-14 13:55:08.977443] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:29:09.497   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
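With the listener up, the target side of the test is fully assembled. Since rpc_cmd is effectively a wrapper over SPDK's scripts/rpc.py, the setup steps traced above replay as (arguments copied from the trace):

    # target-side setup for nvmf_async_init, replayed via rpc.py
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_null_create null0 1024 512      # 1024 MiB bdev, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24bdd7db6bf14e29b43f5389f3ac4dbf
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420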
00:29:09.497   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:29:09.497   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.497   13:55:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.497  nvme0n1
00:29:09.497   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.497   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:29:09.497   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.497   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.497  [
00:29:09.497  {
00:29:09.497  "name": "nvme0n1",
00:29:09.497  "aliases": [
00:29:09.497  "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf"
00:29:09.497  ],
00:29:09.497  "product_name": "NVMe disk",
00:29:09.497  "block_size": 512,
00:29:09.497  "num_blocks": 2097152,
00:29:09.497  "uuid": "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf",
00:29:09.497  "numa_id": 1,
00:29:09.497  "assigned_rate_limits": {
00:29:09.497  "rw_ios_per_sec": 0,
00:29:09.497  "rw_mbytes_per_sec": 0,
00:29:09.497  "r_mbytes_per_sec": 0,
00:29:09.497  "w_mbytes_per_sec": 0
00:29:09.497  },
00:29:09.497  "claimed": false,
00:29:09.497  "zoned": false,
00:29:09.497  "supported_io_types": {
00:29:09.497  "read": true,
00:29:09.497  "write": true,
00:29:09.497  "unmap": false,
00:29:09.497  "flush": true,
00:29:09.497  "reset": true,
00:29:09.497  "nvme_admin": true,
00:29:09.497  "nvme_io": true,
00:29:09.497  "nvme_io_md": false,
00:29:09.498  "write_zeroes": true,
00:29:09.498  "zcopy": false,
00:29:09.498  "get_zone_info": false,
00:29:09.498  "zone_management": false,
00:29:09.498  "zone_append": false,
00:29:09.498  "compare": true,
00:29:09.498  "compare_and_write": true,
00:29:09.498  "abort": true,
00:29:09.498  "seek_hole": false,
00:29:09.498  "seek_data": false,
00:29:09.498  "copy": true,
00:29:09.498  "nvme_iov_md": false
00:29:09.498  },
00:29:09.498  "memory_domains": [
00:29:09.498  {
00:29:09.498  "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:29:09.498  "dma_device_type": 0
00:29:09.498  }
00:29:09.498  ],
00:29:09.498  "driver_specific": {
00:29:09.498  "nvme": [
00:29:09.498  {
00:29:09.498  "trid": {
00:29:09.498  "trtype": "RDMA",
00:29:09.498  "adrfam": "IPv4",
00:29:09.498  "traddr": "192.168.100.8",
00:29:09.498  "trsvcid": "4420",
00:29:09.498  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:29:09.498  },
00:29:09.498  "ctrlr_data": {
00:29:09.498  "cntlid": 1,
00:29:09.498  "vendor_id": "0x8086",
00:29:09.498  "model_number": "SPDK bdev Controller",
00:29:09.498  "serial_number": "00000000000000000000",
00:29:09.498  "firmware_revision": "25.01",
00:29:09.498  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:09.498  "oacs": {
00:29:09.498  "security": 0,
00:29:09.498  "format": 0,
00:29:09.498  "firmware": 0,
00:29:09.498  "ns_manage": 0
00:29:09.498  },
00:29:09.498  "multi_ctrlr": true,
00:29:09.498  "ana_reporting": false
00:29:09.498  },
00:29:09.498  "vs": {
00:29:09.498  "nvme_version": "1.3"
00:29:09.498  },
00:29:09.498  "ns_data": {
00:29:09.498  "id": 1,
00:29:09.498  "can_share": true
00:29:09.498  }
00:29:09.498  }
00:29:09.498  ],
00:29:09.498  "mp_policy": "active_passive"
00:29:09.498  }
00:29:09.498  }
00:29:09.498  ]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
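Two details in the dump above are worth pinning down. The namespace GUID is just the test's uuidgen output with dashes stripped (host/async_init.sh@20), which is why 24bdd7db6bf14e29b43f5389f3ac4dbf reappears dashed under "aliases" and "uuid". And the geometry matches the bdev_null_create arguments:

    # 2097152 blocks of 512 bytes = 1024 MiB, i.e. bdev_null_create null0 1024 512
    echo $(( 2097152 * 512 / 1024 / 1024 ))   # -> 1024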
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498  [2024-12-14 13:55:09.072500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:09.498  [2024-12-14 13:55:09.105996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:09.498  [2024-12-14 13:55:09.128281] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498  [
00:29:09.498  {
00:29:09.498  "name": "nvme0n1",
00:29:09.498  "aliases": [
00:29:09.498  "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf"
00:29:09.498  ],
00:29:09.498  "product_name": "NVMe disk",
00:29:09.498  "block_size": 512,
00:29:09.498  "num_blocks": 2097152,
00:29:09.498  "uuid": "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf",
00:29:09.498  "numa_id": 1,
00:29:09.498  "assigned_rate_limits": {
00:29:09.498  "rw_ios_per_sec": 0,
00:29:09.498  "rw_mbytes_per_sec": 0,
00:29:09.498  "r_mbytes_per_sec": 0,
00:29:09.498  "w_mbytes_per_sec": 0
00:29:09.498  },
00:29:09.498  "claimed": false,
00:29:09.498  "zoned": false,
00:29:09.498  "supported_io_types": {
00:29:09.498  "read": true,
00:29:09.498  "write": true,
00:29:09.498  "unmap": false,
00:29:09.498  "flush": true,
00:29:09.498  "reset": true,
00:29:09.498  "nvme_admin": true,
00:29:09.498  "nvme_io": true,
00:29:09.498  "nvme_io_md": false,
00:29:09.498  "write_zeroes": true,
00:29:09.498  "zcopy": false,
00:29:09.498  "get_zone_info": false,
00:29:09.498  "zone_management": false,
00:29:09.498  "zone_append": false,
00:29:09.498  "compare": true,
00:29:09.498  "compare_and_write": true,
00:29:09.498  "abort": true,
00:29:09.498  "seek_hole": false,
00:29:09.498  "seek_data": false,
00:29:09.498  "copy": true,
00:29:09.498  "nvme_iov_md": false
00:29:09.498  },
00:29:09.498  "memory_domains": [
00:29:09.498  {
00:29:09.498  "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:29:09.498  "dma_device_type": 0
00:29:09.498  }
00:29:09.498  ],
00:29:09.498  "driver_specific": {
00:29:09.498  "nvme": [
00:29:09.498  {
00:29:09.498  "trid": {
00:29:09.498  "trtype": "RDMA",
00:29:09.498  "adrfam": "IPv4",
00:29:09.498  "traddr": "192.168.100.8",
00:29:09.498  "trsvcid": "4420",
00:29:09.498  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:29:09.498  },
00:29:09.498  "ctrlr_data": {
00:29:09.498  "cntlid": 2,
00:29:09.498  "vendor_id": "0x8086",
00:29:09.498  "model_number": "SPDK bdev Controller",
00:29:09.498  "serial_number": "00000000000000000000",
00:29:09.498  "firmware_revision": "25.01",
00:29:09.498  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:09.498  "oacs": {
00:29:09.498  "security": 0,
00:29:09.498  "format": 0,
00:29:09.498  "firmware": 0,
00:29:09.498  "ns_manage": 0
00:29:09.498  },
00:29:09.498  "multi_ctrlr": true,
00:29:09.498  "ana_reporting": false
00:29:09.498  },
00:29:09.498  "vs": {
00:29:09.498  "nvme_version": "1.3"
00:29:09.498  },
00:29:09.498  "ns_data": {
00:29:09.498  "id": 1,
00:29:09.498  "can_share": true
00:29:09.498  }
00:29:09.498  }
00:29:09.498  ],
00:29:09.498  "mp_policy": "active_passive"
00:29:09.498  }
00:29:09.498  }
00:29:09.498  ]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
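Comparing this dump with the one taken before the reset, only "cntlid" changed (1 to 2): the reset at host/async_init.sh@44 disconnected the controller (note the CQ transport error while the qpair drained) and reconnected as a new controller on the same subsystem, while the bdev name, UUID, and namespace data survived. A quick check for that, assuming python3 is available on the box:

    # cntlid before/after bdev_nvme_reset_controller: expect it to increment
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | python3 -c 'import json,sys; b=json.load(sys.stdin)[0]; print(b["driver_specific"]["nvme"][0]["ctrlr_data"]["cntlid"])'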
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498    13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Mk9yUZwxcx
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Mk9yUZwxcx
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Mk9yUZwxcx
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498  [2024-12-14 13:55:09.220707] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.498   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.756  [2024-12-14 13:55:09.236748] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:29:09.756  nvme0n1
00:29:09.756   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
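The secure-channel leg condenses to the same rpc.py replay, with the PSK registered through the keyring and required for host1 (flags copied from the trace; the interchange-format test key is the one echoed above):

    # TLS/PSK variant on the second port, as traced at host/async_init.sh@53-66
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The "TLS support is considered experimental" notice is SPDK's own warning at attach time; the dump that follows confirms the controller came up on port 4421 with cntlid 3.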
00:29:09.756   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:29:09.756   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.756   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.756  [
00:29:09.756  {
00:29:09.756  "name": "nvme0n1",
00:29:09.756  "aliases": [
00:29:09.756  "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf"
00:29:09.756  ],
00:29:09.756  "product_name": "NVMe disk",
00:29:09.756  "block_size": 512,
00:29:09.756  "num_blocks": 2097152,
00:29:09.756  "uuid": "24bdd7db-6bf1-4e29-b43f-5389f3ac4dbf",
00:29:09.756  "numa_id": 1,
00:29:09.756  "assigned_rate_limits": {
00:29:09.756  "rw_ios_per_sec": 0,
00:29:09.756  "rw_mbytes_per_sec": 0,
00:29:09.756  "r_mbytes_per_sec": 0,
00:29:09.756  "w_mbytes_per_sec": 0
00:29:09.756  },
00:29:09.756  "claimed": false,
00:29:09.756  "zoned": false,
00:29:09.756  "supported_io_types": {
00:29:09.756  "read": true,
00:29:09.756  "write": true,
00:29:09.756  "unmap": false,
00:29:09.756  "flush": true,
00:29:09.756  "reset": true,
00:29:09.756  "nvme_admin": true,
00:29:09.756  "nvme_io": true,
00:29:09.756  "nvme_io_md": false,
00:29:09.756  "write_zeroes": true,
00:29:09.756  "zcopy": false,
00:29:09.756  "get_zone_info": false,
00:29:09.756  "zone_management": false,
00:29:09.756  "zone_append": false,
00:29:09.756  "compare": true,
00:29:09.756  "compare_and_write": true,
00:29:09.756  "abort": true,
00:29:09.756  "seek_hole": false,
00:29:09.756  "seek_data": false,
00:29:09.756  "copy": true,
00:29:09.756  "nvme_iov_md": false
00:29:09.756  },
00:29:09.756  "memory_domains": [
00:29:09.756  {
00:29:09.756  "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:29:09.756  "dma_device_type": 0
00:29:09.756  }
00:29:09.756  ],
00:29:09.756  "driver_specific": {
00:29:09.756  "nvme": [
00:29:09.756  {
00:29:09.756  "trid": {
00:29:09.756  "trtype": "RDMA",
00:29:09.756  "adrfam": "IPv4",
00:29:09.756  "traddr": "192.168.100.8",
00:29:09.756  "trsvcid": "4421",
00:29:09.756  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:29:09.756  },
00:29:09.756  "ctrlr_data": {
00:29:09.756  "cntlid": 3,
00:29:09.756  "vendor_id": "0x8086",
00:29:09.756  "model_number": "SPDK bdev Controller",
00:29:09.756  "serial_number": "00000000000000000000",
00:29:09.756  "firmware_revision": "25.01",
00:29:09.756  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:09.756  "oacs": {
00:29:09.756  "security": 0,
00:29:09.756  "format": 0,
00:29:09.756  "firmware": 0,
00:29:09.756  "ns_manage": 0
00:29:09.756  },
00:29:09.756  "multi_ctrlr": true,
00:29:09.756  "ana_reporting": false
00:29:09.756  },
00:29:09.756  "vs": {
00:29:09.756  "nvme_version": "1.3"
00:29:09.756  },
00:29:09.756  "ns_data": {
00:29:09.756  "id": 1,
00:29:09.756  "can_share": true
00:29:09.756  }
00:29:09.756  }
00:29:09.756  ],
00:29:09.756  "mp_policy": "active_passive"
00:29:09.756  }
00:29:09.757  }
00:29:09.757  ]
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Mk9yUZwxcx
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:29:09.757  rmmod nvme_rdma
00:29:09.757  rmmod nvme_fabrics
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3447965 ']'
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3447965
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3447965 ']'
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3447965
00:29:09.757    13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:09.757    13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447965
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447965'
00:29:09.757  killing process with pid 3447965
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3447965
00:29:09.757   13:55:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3447965
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:29:11.132  
00:29:11.132  real	0m9.472s
00:29:11.132  user	0m4.479s
00:29:11.132  sys	0m5.712s
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:29:11.132  ************************************
00:29:11.132  END TEST nvmf_async_init
00:29:11.132  ************************************
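Annotation: the nvmf_async_init teardown traced above follows the harness's standard unload pattern: errexit is relaxed, nvme-rdma is removed (up to 20 attempts), then nvme-fabrics, then errexit is restored. A minimal sketch of that pattern; the exact break/retry condition inside nvmf/common.sh is not visible in this trace:

  set +e
  for i in {1..20}; do
      # prints "rmmod nvme_rdma" / "rmmod nvme_fabrics" on success
      modprobe -v -r nvme-rdma && break
  done
  modprobe -v -r nvme-fabrics
  set -e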
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.132  ************************************
00:29:11.132  START TEST dma
00:29:11.132  ************************************
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma
00:29:11.132  * Looking for test storage...
00:29:11.132  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:11.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.132  		--rc genhtml_branch_coverage=1
00:29:11.132  		--rc genhtml_function_coverage=1
00:29:11.132  		--rc genhtml_legend=1
00:29:11.132  		--rc geninfo_all_blocks=1
00:29:11.132  		--rc geninfo_unexecuted_blocks=1
00:29:11.132  		
00:29:11.132  		'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:11.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.132  		--rc genhtml_branch_coverage=1
00:29:11.132  		--rc genhtml_function_coverage=1
00:29:11.132  		--rc genhtml_legend=1
00:29:11.132  		--rc geninfo_all_blocks=1
00:29:11.132  		--rc geninfo_unexecuted_blocks=1
00:29:11.132  		
00:29:11.132  		'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:11.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.132  		--rc genhtml_branch_coverage=1
00:29:11.132  		--rc genhtml_function_coverage=1
00:29:11.132  		--rc genhtml_legend=1
00:29:11.132  		--rc geninfo_all_blocks=1
00:29:11.132  		--rc geninfo_unexecuted_blocks=1
00:29:11.132  		
00:29:11.132  		'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:11.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.132  		--rc genhtml_branch_coverage=1
00:29:11.132  		--rc genhtml_function_coverage=1
00:29:11.132  		--rc genhtml_legend=1
00:29:11.132  		--rc geninfo_all_blocks=1
00:29:11.132  		--rc geninfo_unexecuted_blocks=1
00:29:11.132  		
00:29:11.132  		'
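Annotation: the cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them component by component; that is how `lt 1.15 2` decides the installed lcov predates 2.x, which is why the extra --rc coverage options get exported. A simplified reimplementation of the traced logic (the real scripts/common.sh also handles '>', '<=', '>=' and '=='):

  cmp_versions() {
      local op=$2 v
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          # missing components default to 0, so 2 compares like 2.0
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> exit 0 (true)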
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:11.132     13:55:10 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:11.132      13:55:10 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:11.132      13:55:10 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:11.132      13:55:10 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:11.132      13:55:10 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH
00:29:11.132      13:55:10 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:11.132  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0
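Annotation: the "integer expression expected" message above is real script fallout rather than a test failure: common.sh line 33 runs `'[' '' -eq 1 ']'` because a variable expands empty in this environment, and `[` refuses a numeric test on an empty string. A defensive form defaults the empty expansion to 0 (the variable name below is illustrative; the log does not show which one line 33 expands):

  [ "$SOME_FLAG" -eq 1 ] && echo enabled        # errors when SOME_FLAG is empty
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled   # empty/unset falls back to 0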
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']'
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:11.132    13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable
00:29:11.132   13:55:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=()
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:29:17.690  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:29:17.690  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:29:17.690  Found net devices under 0000:d9:00.0: mlx_0_0
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:17.690   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:29:17.690  Found net devices under 0000:d9:00.1: mlx_0_1
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
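Annotation: the discovery pass above maps each RDMA-capable PCI function to its kernel netdev purely through sysfs; condensed from the traced loop, with the array names used by nvmf/common.sh:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:d9:00.0/net/mlx_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done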
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2
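Annotation: get_rdma_if_list, traced above, intersects the discovered netdevs with what rxe_cfg reports, emitting each match and jumping to the next candidate with `continue 2`. A condensed sketch (rxe_cfg wraps scripts/rxe_cfg_small.sh; how the real helper feeds mapfile is not fully visible in the trace):

  get_rdma_if_list() {
      local net_dev rxe_net_dev rxe_net_devs
      mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
      for net_dev in "${net_devs[@]}"; do
          for rxe_net_dev in "${rxe_net_devs[@]}"; do
              if [[ $net_dev == "$rxe_net_dev" ]]; then
                  echo "$net_dev"
                  continue 2        # matched; move on to the next net_dev
              fi
          done
      done
  }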
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:29:17.691  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:17.691      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:29:17.691      altname enp217s0f0np0
00:29:17.691      altname ens818f0np0
00:29:17.691      inet 192.168.100.8/24 scope global mlx_0_0
00:29:17.691         valid_lft forever preferred_lft forever
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:29:17.691  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:17.691      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:29:17.691      altname enp217s0f1np1
00:29:17.691      altname ens818f1np1
00:29:17.691      inet 192.168.100.9/24 scope global mlx_0_1
00:29:17.691         valid_lft forever preferred_lft forever
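Annotation: both addresses above come from the same three-stage pipeline traced as get_ip_address: take the one-line `ip -o` form, keep field 4 (the CIDR address), drop the prefix length:

  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1    # -> 192.168.100.9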
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:17.691      13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:17.691      13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:17.691     13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:29:17.691  192.168.100.9'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:29:17.691  192.168.100.9'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:29:17.691  192.168.100.9'
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2
00:29:17.691    13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
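Annotation: the two target IPs are then peeled off the newline-separated RDMA_IP_LIST exactly as traced, first line and second line:

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)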
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3451677
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3451677
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3451677 ']'
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:17.691  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:17.691   13:55:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:17.950  [2024-12-14 13:55:17.498454] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:17.950  [2024-12-14 13:55:17.498550] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:17.950  [2024-12-14 13:55:17.631992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:18.208  [2024-12-14 13:55:17.726140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:18.208  [2024-12-14 13:55:17.726184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:18.208  [2024-12-14 13:55:17.726196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:18.208  [2024-12-14 13:55:17.726208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:18.208  [2024-12-14 13:55:17.726217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:18.208  [2024-12-14 13:55:17.728191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:29:18.208  [2024-12-14 13:55:17.728198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.773   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:18.773  [2024-12-14 13:55:18.372025] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f722bb31940) succeed.
00:29:18.773  [2024-12-14 13:55:18.381279] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f722b1bd940) succeed.
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:19.031  Malloc0
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.031   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:19.289  [2024-12-14 13:55:18.787597] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
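Annotation: the rpc_cmd calls above are ordinary SPDK RPCs (rpc_cmd is the harness wrapper around scripts/rpc.py). A sketch of provisioning the same target by hand against the default /var/tmp/spdk.sock, with arguments copied from the trace:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0            # 256 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420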
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.289   13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=()
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:19.289  {
00:29:19.289    "params": {
00:29:19.289      "name": "Nvme$subsystem",
00:29:19.289      "trtype": "$TEST_TRANSPORT",
00:29:19.289      "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:19.289      "adrfam": "ipv4",
00:29:19.289      "trsvcid": "$NVMF_PORT",
00:29:19.289      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:19.289      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:19.289      "hdgst": ${hdgst:-false},
00:29:19.289      "ddgst": ${ddgst:-false}
00:29:19.289    },
00:29:19.289    "method": "bdev_nvme_attach_controller"
00:29:19.289  }
00:29:19.289  EOF
00:29:19.289  )")
00:29:19.289     13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat
00:29:19.289    13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq .
00:29:19.289     13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=,
00:29:19.289     13:55:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:19.289    "params": {
00:29:19.289      "name": "Nvme0",
00:29:19.289      "trtype": "rdma",
00:29:19.289      "traddr": "192.168.100.8",
00:29:19.289      "adrfam": "ipv4",
00:29:19.289      "trsvcid": "4420",
00:29:19.289      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:19.289      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:19.289      "hdgst": false,
00:29:19.289      "ddgst": false
00:29:19.289    },
00:29:19.289    "method": "bdev_nvme_attach_controller"
00:29:19.289  }'
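Annotation: gen_nvmf_target_json, traced above, stamps a heredoc template once per requested subsystem, joins the fragments with IFS=',', and pipes the result through `jq .` so test_dma receives validated JSON on /dev/fd/62. A sketch trimmed to the single-subsystem case visible here, with this run's values inlined in place of the harness variables (with several subsystems the real helper must also wrap the joined fragments):

  gen_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
          config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "rdma",
      "traddr": "192.168.100.8",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
          )")
      done
      local IFS=,
      printf '%s\n' "${config[*]}" | jq .    # fails loudly on malformed JSON
  }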
00:29:19.289  [2024-12-14 13:55:18.875141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:19.289  [2024-12-14 13:55:18.875225] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451961 ]
00:29:19.289  [2024-12-14 13:55:19.000694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:19.547  [2024-12-14 13:55:19.104576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:29:19.547  [2024-12-14 13:55:19.104585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:29:26.229  bdev Nvme0n1 reports 1 memory domains
00:29:26.229  bdev Nvme0n1 supports RDMA memory domain
00:29:26.229  Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:26.229  ==========================================================================
00:29:26.229                                             Latency [us]
00:29:26.229                 IOPS      MiB/s    Average        min        max
00:29:26.229  Core  2:   19291.89      75.36     828.69     285.33   12771.68
00:29:26.229  Core  3:   19150.31      74.81     834.80     275.59   12497.87
00:29:26.229  ==========================================================================
00:29:26.229  Total  :   38442.20     150.16     831.73     275.59   12771.68
00:29:26.229  
00:29:26.229  Total operations: 192228, translate 192228 pull_push 0 memzero 0
00:29:26.229   13:55:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:29:26.229    13:55:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:29:26.229    13:55:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:29:26.229  [2024-12-14 13:55:25.500676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:26.229  [2024-12-14 13:55:25.500776] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453030 ]
00:29:26.229  [2024-12-14 13:55:25.630165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:26.229  [2024-12-14 13:55:25.734862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:29:26.229  [2024-12-14 13:55:25.734871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:29:32.787  bdev Malloc0 reports 2 memory domains
00:29:32.787  bdev Malloc0 doesn't support RDMA memory domain
00:29:32.787  Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:32.787  ==========================================================================
00:29:32.787                                             Latency [us]
00:29:32.787                 IOPS      MiB/s    Average        min        max
00:29:32.787  Core  2:   12075.91      47.17    1324.08     445.79    2266.64
00:29:32.787  Core  3:   12339.01      48.20    1295.81     436.53    2597.83
00:29:32.787  ==========================================================================
00:29:32.787  Total  :   24414.92      95.37    1309.79     436.53    2597.83
00:29:32.787  
00:29:32.787  Total operations: 122124, translate 0 pull_push 488496 memzero 0
00:29:32.787   13:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:29:32.787    13:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:29:32.787    13:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:29:32.787    13:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:29:32.787  Ignoring -M option
00:29:32.787  [2024-12-14 13:55:32.470265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:32.787  [2024-12-14 13:55:32.470377] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454102 ]
00:29:33.046  [2024-12-14 13:55:32.599842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:33.046  [2024-12-14 13:55:32.701800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:29:33.046  [2024-12-14 13:55:32.701809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:29:39.602  bdev 24be504f-29ca-4dec-aa51-f8f92b3da344 reports 1 memory domains
00:29:39.602  bdev 24be504f-29ca-4dec-aa51-f8f92b3da344 supports RDMA memory domain
00:29:39.602  Initialization complete, running randread IO for 5 sec on 2 cores
00:29:39.602  ==========================================================================
00:29:39.602                                             Latency [us]
00:29:39.602                 IOPS      MiB/s    Average        min        max
00:29:39.602  Core  2:   61485.35     240.18     259.34      90.97    2056.78
00:29:39.602  Core  3:   63607.29     248.47     250.68      83.57    2057.41
00:29:39.602  ==========================================================================
00:29:39.602  Total  :  125092.64     488.64     254.93      83.57    2057.41
00:29:39.602  
00:29:39.602  Total operations: 625539, translate 0 pull_push 0 memzero 625539
00:29:39.602   13:55:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:29:39.602  [2024-12-14 13:55:39.259893] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:29:42.131  Initializing NVMe Controllers
00:29:42.131  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:29:42.131  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:29:42.131  Initialization complete. Launching workers.
00:29:42.131  ========================================================
00:29:42.131                                                                                                                     Latency(us)
00:29:42.131  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:29:42.131  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  0:    2016.00       7.88    7972.65    5460.97   10976.78
00:29:42.131  ========================================================
00:29:42.131  Total                                                                          :    2016.00       7.88    7972.65    5460.97   10976.78
00:29:42.131  
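Annotation: the block above is plain spdk_nvme_perf output; the run can be reproduced outside the harness with the same transport ID string (command copied from the invocation above, reflowed):

  ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'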
00:29:42.131   13:55:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:29:42.131    13:55:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:29:42.131    13:55:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:29:42.131    13:55:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:29:42.131  [2024-12-14 13:55:41.743033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:42.131  [2024-12-14 13:55:41.743152] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455690 ]
00:29:42.389  [2024-12-14 13:55:41.872182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:42.389  [2024-12-14 13:55:41.978092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:29:42.389  [2024-12-14 13:55:41.978098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:29:48.947  bdev d35381a6-4863-4dea-abae-a13c1fe8261f reports 1 memory domains
00:29:48.947  bdev d35381a6-4863-4dea-abae-a13c1fe8261f supports RDMA memory domain
00:29:48.947  Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:48.947  ==========================================================================
00:29:48.947                                             Latency [us]
00:29:48.947                 IOPS      MiB/s    Average        min        max
00:29:48.947  Core  2:   16998.23      66.40     940.59      16.29    8013.33
00:29:48.947  Core  3:   16678.72      65.15     958.58      13.78    7391.04
00:29:48.947  ==========================================================================
00:29:48.947  Total  :   33676.95     131.55     949.50      13.78    8013.33
00:29:48.947  
00:29:48.947  Total operations: 168432, translate 168288 pull_push 0 memzero 144
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:29:48.947  rmmod nvme_rdma
00:29:48.947  rmmod nvme_fabrics
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3451677 ']'
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3451677
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3451677 ']'
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3451677
00:29:48.947    13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:48.947    13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451677
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451677'
00:29:48.947  killing process with pid 3451677
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3451677
00:29:48.947   13:55:48 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3451677
00:29:50.847   13:55:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:50.847   13:55:50 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:29:50.847  
00:29:50.847  real	0m39.964s
00:29:50.847  user	1m57.560s
00:29:50.847  sys	0m6.973s
00:29:50.847   13:55:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:50.847   13:55:50 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:29:50.847  ************************************
00:29:50.847  END TEST dma
00:29:50.847  ************************************
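Annotation: the shutdown above reuses the killprocess helper traced at common/autotest_common.sh lines 954-978. Simplified, it guards against empty pids, already-dead pids, and the sudo wrapper before killing and reaping:

  killprocess() {
      local pid=$1 name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for nvmf_tgt
      fi
      [ "$name" = sudo ] && return 1                  # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }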
00:29:51.105   13:55:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:29:51.105   13:55:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:51.105   13:55:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:51.105   13:55:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.105  ************************************
00:29:51.106  START TEST nvmf_identify
00:29:51.106  ************************************
00:29:51.106   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:29:51.106  * Looking for test storage...
00:29:51.106  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
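
The trace above is scripts/common.sh deciding that the installed lcov (1.15) sorts before version 2, using a dot/dash-split, field-by-field comparison. A minimal standalone sketch of that comparison, assuming plain bash (the helper name and structure are simplified from the traced cmp_versions/lt pair):

    # lt A B: succeed when version A sorts before version B.
    lt() {
      local -a v1 v2
      local i max
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2 ... return 0' trace above
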
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:51.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:51.106  		--rc genhtml_branch_coverage=1
00:29:51.106  		--rc genhtml_function_coverage=1
00:29:51.106  		--rc genhtml_legend=1
00:29:51.106  		--rc geninfo_all_blocks=1
00:29:51.106  		--rc geninfo_unexecuted_blocks=1
00:29:51.106  		
00:29:51.106  		'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:51.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:51.106  		--rc genhtml_branch_coverage=1
00:29:51.106  		--rc genhtml_function_coverage=1
00:29:51.106  		--rc genhtml_legend=1
00:29:51.106  		--rc geninfo_all_blocks=1
00:29:51.106  		--rc geninfo_unexecuted_blocks=1
00:29:51.106  		
00:29:51.106  		'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:51.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:51.106  		--rc genhtml_branch_coverage=1
00:29:51.106  		--rc genhtml_function_coverage=1
00:29:51.106  		--rc genhtml_legend=1
00:29:51.106  		--rc geninfo_all_blocks=1
00:29:51.106  		--rc geninfo_unexecuted_blocks=1
00:29:51.106  		
00:29:51.106  		'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:51.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:51.106  		--rc genhtml_branch_coverage=1
00:29:51.106  		--rc genhtml_function_coverage=1
00:29:51.106  		--rc genhtml_legend=1
00:29:51.106  		--rc geninfo_all_blocks=1
00:29:51.106  		--rc geninfo_unexecuted_blocks=1
00:29:51.106  		
00:29:51.106  		'
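
Because the version check passed, the harness exports LCOV_OPTS and LCOV with `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` plus the matching genhtml/geninfo flags. The capture step itself never appears in this log; a hypothetical consumer of those variables would look like (paths are illustrative only):

    lcov $LCOV_OPTS --capture --directory build --output-file cov.info
    genhtml $LCOV_OPTS --output-directory cov_html cov.info

LCOV_OPTS is deliberately left unquoted so the embedded --rc options word-split back into separate arguments.
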
00:29:51.106   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
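
nvmf/common.sh seeds the host identity from nvme-cli: `nvme gen-hostnqn` prints an nqn.2014-08.org.nvmexpress:uuid:... string, and its UUID suffix becomes NVME_HOSTID (8013ee90-59d8-e711-906e-00163566263e on this node). The same derivation stands alone as:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID portion
    echo "$NVME_HOSTNQN -> $NVME_HOSTID"
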
00:29:51.106    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:29:51.106     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob
00:29:51.365     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:51.365     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:51.365     13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:51.365      13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:51.365      13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:51.365      13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:51.365      13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH
00:29:51.365      13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:51.365  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
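
The `[: : integer expression expected` message above is a real, if harmless, bug: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and test's -eq requires integer operands, so an unset/empty variable makes the comparison itself fail. Since `[` merely returns non-zero, build_nvmf_app_args continues and just skips that optional app argument. A defensive pattern that keeps the semantics without the noise (FLAG is a hypothetical stand-in for whichever variable is empty on this run):

    if [ "${FLAG:-0}" -eq 1 ]; then
      echo "optional app argument would be appended here"
    fi
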
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:51.365    13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable
00:29:51.365   13:55:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:57.920   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:29:57.921  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:29:57.921  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:29:57.921  Found net devices under 0000:d9:00.0: mlx_0_0
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:29:57.921  Found net devices under 0000:d9:00.1: mlx_0_1
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
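
Each discovered PCI function is mapped to its network interface purely through sysfs: the glob `/sys/bus/pci/devices/$pci/net/*` lists the netdevs bound to that address, and the `##*/` expansion strips the path down to the interface name (mlx_0_0 and mlx_0_1 above). Standalone:

    pci=0000:d9:00.0    # first Mellanox function from this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep just the ifnames
    printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"
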
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm
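
rdma_device_init begins by loading the kernel RDMA stack in dependency order, exactly the seven modprobe calls traced above. Run standalone (as root) this is simply:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
    done
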
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:29:57.921  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:57.921      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:29:57.921      altname enp217s0f0np0
00:29:57.921      altname ens818f0np0
00:29:57.921      inet 192.168.100.8/24 scope global mlx_0_0
00:29:57.921         valid_lft forever preferred_lft forever
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:29:57.921  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:57.921      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:29:57.921      altname enp217s0f1np1
00:29:57.921      altname ens818f1np1
00:29:57.921      inet 192.168.100.9/24 scope global mlx_0_1
00:29:57.921         valid_lft forever preferred_lft forever
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
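
get_ip_address, traced twice above, reads an interface's IPv4 address by taking field 4 of the one-line `ip -o -4` output and dropping the /24 prefix:

    get_ip_address() {
      local ifc=$1
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9
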
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:29:57.921   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:29:57.921    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:29:57.921      13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:29:57.921      13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.921     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1
00:29:57.922     13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}'
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:29:57.922  192.168.100.9'
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:29:57.922  192.168.100.9'
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:29:57.922  192.168.100.9'
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2
00:29:57.922    13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
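
RDMA_IP_LIST is just a newline-separated string, so the first and second target IPs fall out of head/tail exactly as traced:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
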
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3460437
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3460437
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3460437 ']'
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:57.922  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:57.922   13:55:57 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:57.922  [2024-12-14 13:55:57.583351] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:57.922  [2024-12-14 13:55:57.583454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:58.179  [2024-12-14 13:55:57.718260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:58.179  [2024-12-14 13:55:57.819055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.179  [2024-12-14 13:55:57.819114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.179  [2024-12-14 13:55:57.819126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.179  [2024-12-14 13:55:57.819155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.179  [2024-12-14 13:55:57.819165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.179  [2024-12-14 13:55:57.821616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:58.179  [2024-12-14 13:55:57.821690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:29:58.179  [2024-12-14 13:55:57.821795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:29:58.179  [2024-12-14 13:55:57.821803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:29:58.744   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:58.744   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
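
waitforlisten blocks until the backgrounded nvmf_tgt (pid 3460437, started with `-i 0 -e 0xFFFF -m 0xF`) is alive and answering on /var/tmp/spdk.sock, giving up after max_retries=100. A rough standalone approximation of that loop (the real helper also drives the RPC client to confirm the server responds, which this sketch reduces to a socket-existence check):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket showed up
        sleep 0.1
      done
      return 1
    }
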
00:29:58.744   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:29:58.744   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.744   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:58.744  [2024-12-14 13:55:58.436792] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7ffbf3b61940) succeed.
00:29:58.744  [2024-12-14 13:55:58.446962] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7ffbf3b1d940) succeed.
00:29:59.001   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.002   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:29:59.002   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:59.002   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258  Malloc0
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258  [2024-12-14 13:55:58.843043] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
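
The rpc_cmd calls above fully provision the target: an RDMA transport with 1024 shared buffers and an 8 KiB I/O unit, a 64 MiB/512 B-block Malloc bdev, subsystem cnode1 with a fixed serial, the namespace with pinned NGUID/EUI-64, and data plus discovery listeners on 192.168.100.8:4420. The equivalent sequence against the same socket via scripts/rpc.py:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
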
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.258  [
00:29:59.258    {
00:29:59.258      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:29:59.258      "subtype": "Discovery",
00:29:59.258      "listen_addresses": [
00:29:59.258        {
00:29:59.258          "trtype": "RDMA",
00:29:59.258          "adrfam": "IPv4",
00:29:59.258          "traddr": "192.168.100.8",
00:29:59.258          "trsvcid": "4420"
00:29:59.258        }
00:29:59.258      ],
00:29:59.258      "allow_any_host": true,
00:29:59.258      "hosts": []
00:29:59.258    },
00:29:59.258    {
00:29:59.258      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:29:59.258      "subtype": "NVMe",
00:29:59.258      "listen_addresses": [
00:29:59.258        {
00:29:59.258          "trtype": "RDMA",
00:29:59.258          "adrfam": "IPv4",
00:29:59.258          "traddr": "192.168.100.8",
00:29:59.258          "trsvcid": "4420"
00:29:59.258        }
00:29:59.258      ],
00:29:59.258      "allow_any_host": true,
00:29:59.258      "hosts": [],
00:29:59.258      "serial_number": "SPDK00000000000001",
00:29:59.258      "model_number": "SPDK bdev Controller",
00:29:59.258      "max_namespaces": 32,
00:29:59.258      "min_cntlid": 1,
00:29:59.258      "max_cntlid": 65519,
00:29:59.258      "namespaces": [
00:29:59.258        {
00:29:59.258          "nsid": 1,
00:29:59.258          "bdev_name": "Malloc0",
00:29:59.258          "name": "Malloc0",
00:29:59.258          "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:29:59.258          "eui64": "ABCDEF0123456789",
00:29:59.258          "uuid": "f1450eeb-5f61-41a2-9c16-f1f5ca067973"
00:29:59.258        }
00:29:59.258      ]
00:29:59.258    }
00:29:59.258  ]
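
nvmf_get_subsystems returns the two subsystems as the JSON document above; individual fields are easy to pull with jq, for instance the data subsystem's listener address:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_get_subsystems |
      jq -r '.[] | select(.subtype == "NVMe") | .listen_addresses[0].traddr'
    # -> 192.168.100.8
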
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.258   13:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:rdma         adrfam:IPv4         traddr:192.168.100.8         trsvcid:4420         subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:29:59.258  [2024-12-14 13:55:58.926859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:59.258  [2024-12-14 13:55:58.926933] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460725 ]
00:29:59.518  [2024-12-14 13:55:59.011154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:29:59.518  [2024-12-14 13:55:59.011237] nvme_rdma.c:2017:nvme_rdma_ctrlr_create_qpair: *DEBUG*: rqpair 0x2000003d6ec0, append_copy disabled
00:29:59.518  [2024-12-14 13:55:59.011274] nvme_rdma.c:2460:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:29:59.518  [2024-12-14 13:55:59.011299] nvme_rdma.c:1238:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:29:59.518  [2024-12-14 13:55:59.011309] nvme_rdma.c:1242:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:29:59.518  [2024-12-14 13:55:59.011353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:29:59.518  [2024-12-14 13:55:59.022418] nvme_rdma.c: 459:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:29:59.518  [2024-12-14 13:55:59.032899] nvme_rdma.c:1124:nvme_rdma_connect_established: *DEBUG*: rc =0
00:29:59.518  [2024-12-14 13:55:59.032920] nvme_rdma.c:1129:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:29:59.518  [2024-12-14 13:55:59.032943] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.032958] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.032971] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.032983] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.032991] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033000] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033008] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033018] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033026] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033035] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033043] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033053] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033063] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033072] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033080] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033090] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033097] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033109] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033117] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033126] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033134] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033143] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033151] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033167] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033175] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033185] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033193] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033204] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033213] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033223] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033231] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033240] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:29:59.518  [2024-12-14 13:55:59.033249] nvme_rdma.c:1146:nvme_rdma_connect_established: *DEBUG*: rc =0
00:29:59.518  [2024-12-14 13:55:59.033259] nvme_rdma.c:1151:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:29:59.518  [2024-12-14 13:55:59.033289] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.033311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cccc0 len:0x400 key:0x181c00
00:29:59.518  [2024-12-14 13:55:59.037938] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.518  [2024-12-14 13:55:59.037973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:29:59.518  [2024-12-14 13:55:59.037986] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038007] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:59.518  [2024-12-14 13:55:59.038026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:29:59.518  [2024-12-14 13:55:59.038038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:29:59.518  [2024-12-14 13:55:59.038061] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.518  [2024-12-14 13:55:59.038117] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.518  [2024-12-14 13:55:59.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:29:59.518  [2024-12-14 13:55:59.038144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:29:59.518  [2024-12-14 13:55:59.038156] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:29:59.518  [2024-12-14 13:55:59.038182] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.518  [2024-12-14 13:55:59.038223] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.518  [2024-12-14 13:55:59.038232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:29:59.518  [2024-12-14 13:55:59.038244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:29:59.518  [2024-12-14 13:55:59.038253] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:29:59.518  [2024-12-14 13:55:59.038278] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.518  [2024-12-14 13:55:59.038313] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.518  [2024-12-14 13:55:59.038324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:59.518  [2024-12-14 13:55:59.038333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:59.518  [2024-12-14 13:55:59.038344] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038358] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.518  [2024-12-14 13:55:59.038372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.518  [2024-12-14 13:55:59.038394] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.038407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.038416] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:29:59.519  [2024-12-14 13:55:59.038429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:29:59.519  [2024-12-14 13:55:59.038438] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:59.519  [2024-12-14 13:55:59.038562] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:29:59.519  [2024-12-14 13:55:59.038572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:59.519  [2024-12-14 13:55:59.038586] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.519  [2024-12-14 13:55:59.038625] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.038635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.038644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:59.519  [2024-12-14 13:55:59.038655] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038666] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.519  [2024-12-14 13:55:59.038694] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.038707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.038715] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:59.519  [2024-12-14 13:55:59.038726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.038737] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:29:59.519  [2024-12-14 13:55:59.038767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.038788] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181c00
00:29:59.519  [2024-12-14 13:55:59.038871] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.038880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.038901] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:29:59.519  [2024-12-14 13:55:59.038910] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:29:59.519  [2024-12-14 13:55:59.038921] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:29:59.519  [2024-12-14 13:55:59.038939] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:29:59.519  [2024-12-14 13:55:59.038950] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fused compare and write: 1
00:29:59.519  [2024-12-14 13:55:59.038959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.038972] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.038987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.039002] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.519  [2024-12-14 13:55:59.039057] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039079] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce100 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.519  [2024-12-14 13:55:59.039102] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce240 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.519  [2024-12-14 13:55:59.039124] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.519  [2024-12-14 13:55:59.039145] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
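The four ASYNC EVENT REQUEST submissions above (cid 1 through 4) match the Async Event Request Limit of 4 reported in the identify dump further down; the host parks them on the admin queue so the target can signal changes ("Discovery Log Change Notices: Supported"). A minimal sketch of consuming such an event with SPDK's public AER callback; it assumes an already-connected ctrlr, and the aer_cb name is illustrative:

    /* Sketch: decode the cdw0 of an async event completion delivered on
     * one of the outstanding AERs above (assumes a connected ctrlr). */
    #include "spdk/nvme.h"
    #include <stdio.h>

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        union spdk_nvme_async_event_completion ev;

        ev.raw = cpl->cdw0;
        printf("AER: type=%u info=0x%02x log page=0x%02x\n",
               (unsigned)ev.bits.async_event_type,
               (unsigned)ev.bits.async_event_info,
               (unsigned)ev.bits.log_page_identifier);
    }

    /* registration, e.g. right after connect:
     *     spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
     */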
00:29:59.519  [2024-12-14 13:55:59.039164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.039177] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:59.519  [2024-12-14 13:55:59.039205] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.519  [2024-12-14 13:55:59.039236] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039256] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:29:59.519  [2024-12-14 13:55:59.039268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
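The GET FEATURES KEEP ALIVE TIMER completion above carries the negotiated timeout in cdw0: 0x2710 = 10000 ms. SPDK then sends keep alives at half that period, which is where the "every 5000000 us" figure comes from. A worked decode of that arithmetic as a standalone sketch:

    /* Decode of cdw0:2710 from the keep alive timer feature above. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t kato_ms = 0x2710;                          /* cdw0 = KATO in ms */
        uint64_t interval_us = (uint64_t)kato_ms * 1000 / 2; /* host sends at KATO/2 */

        printf("KATO = %" PRIu32 " ms, keep alive every %" PRIu64 " us\n",
               kato_ms, interval_us);                       /* 10000 ms, 5000000 us */
        return 0;
    }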
00:29:59.519  [2024-12-14 13:55:59.039278] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039300] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181c00
00:29:59.519  [2024-12-14 13:55:59.039352] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039381] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039395] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:29:59.519  [2024-12-14 13:55:59.039444] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x400 key:0x181c00
00:29:59.519  [2024-12-14 13:55:59.039472] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.519  [2024-12-14 13:55:59.039525] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039561] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce740 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x181c00
00:29:59.519  [2024-12-14 13:55:59.039587] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039596] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039614] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039627] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039653] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x181c00
00:29:59.519  [2024-12-14 13:55:59.039677] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x181c00
00:29:59.519  [2024-12-14 13:55:59.039700] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.519  [2024-12-14 13:55:59.039713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:59.519  [2024-12-14 13:55:59.039731] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x181c00
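Everything from FABRIC CONNECT through "setting state to ready" above is SPDK's controller init state machine running against the discovery subsystem. A minimal host-side sketch that drives the same sequence against this target; the program name and error handling are illustrative, not part of the test:

    /* Sketch: connect to the RDMA discovery controller logged above.
     * spdk_nvme_connect() internally walks the states seen in the log:
     * FABRIC CONNECT, read VS/CAP, set CC.EN=1, wait for CSTS.RDY=1,
     * IDENTIFY, configure AER, set keep alive timeout, then ready. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";          /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn),
                 "nqn.2014-08.org.nvmexpress.discovery");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }
        printf("connected, CNTLID 0x%04x\n",
               spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);
        spdk_nvme_detach(ctrlr);
        return 0;
    }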
00:29:59.519  =====================================================
00:29:59.519  NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:29:59.519  =====================================================
00:29:59.519  Controller Capabilities/Features
00:29:59.519  ================================
00:29:59.519  Vendor ID:                             0000
00:29:59.519  Subsystem Vendor ID:                   0000
00:29:59.519  Serial Number:                         ....................
00:29:59.520  Model Number:                          ........................................
00:29:59.520  Firmware Version:                      25.01
00:29:59.520  Recommended Arb Burst:                 0
00:29:59.520  IEEE OUI Identifier:                   00 00 00
00:29:59.520  Multi-path I/O
00:29:59.520    May have multiple subsystem ports:   No
00:29:59.520    May have multiple controllers:       No
00:29:59.520    Associated with SR-IOV VF:           No
00:29:59.520  Max Data Transfer Size:                131072
00:29:59.520  Max Number of Namespaces:              0
00:29:59.520  Max Number of I/O Queues:              1024
00:29:59.520  NVMe Specification Version (VS):       1.3
00:29:59.520  NVMe Specification Version (Identify): 1.3
00:29:59.520  Maximum Queue Entries:                 128
00:29:59.520  Contiguous Queues Required:            Yes
00:29:59.520  Arbitration Mechanisms Supported
00:29:59.520    Weighted Round Robin:                Not Supported
00:29:59.520    Vendor Specific:                     Not Supported
00:29:59.520  Reset Timeout:                         15000 ms
00:29:59.520  Doorbell Stride:                       4 bytes
00:29:59.520  NVM Subsystem Reset:                   Not Supported
00:29:59.520  Command Sets Supported
00:29:59.520    NVM Command Set:                     Supported
00:29:59.520  Boot Partition:                        Not Supported
00:29:59.520  Memory Page Size Minimum:              4096 bytes
00:29:59.520  Memory Page Size Maximum:              4096 bytes
00:29:59.520  Persistent Memory Region:              Not Supported
00:29:59.520  Optional Asynchronous Events Supported
00:29:59.520    Namespace Attribute Notices:         Not Supported
00:29:59.520    Firmware Activation Notices:         Not Supported
00:29:59.520    ANA Change Notices:                  Not Supported
00:29:59.520    PLE Aggregate Log Change Notices:    Not Supported
00:29:59.520    LBA Status Info Alert Notices:       Not Supported
00:29:59.520    EGE Aggregate Log Change Notices:    Not Supported
00:29:59.520    Normal NVM Subsystem Shutdown event: Not Supported
00:29:59.520    Zone Descriptor Change Notices:      Not Supported
00:29:59.520    Discovery Log Change Notices:        Supported
00:29:59.520  Controller Attributes
00:29:59.520    128-bit Host Identifier:             Not Supported
00:29:59.520    Non-Operational Permissive Mode:     Not Supported
00:29:59.520    NVM Sets:                            Not Supported
00:29:59.520    Read Recovery Levels:                Not Supported
00:29:59.520    Endurance Groups:                    Not Supported
00:29:59.520    Predictable Latency Mode:            Not Supported
00:29:59.520    Traffic Based Keep Alive:            Not Supported
00:29:59.520    Namespace Granularity:               Not Supported
00:29:59.520    SQ Associations:                     Not Supported
00:29:59.520    UUID List:                           Not Supported
00:29:59.520    Multi-Domain Subsystem:              Not Supported
00:29:59.520    Fixed Capacity Management:           Not Supported
00:29:59.520    Variable Capacity Management:        Not Supported
00:29:59.520    Delete Endurance Group:              Not Supported
00:29:59.520    Delete NVM Set:                      Not Supported
00:29:59.520    Extended LBA Formats Supported:      Not Supported
00:29:59.520    Flexible Data Placement Supported:   Not Supported
00:29:59.520  
00:29:59.520  Controller Memory Buffer Support
00:29:59.520  ================================
00:29:59.520  Supported:                             No
00:29:59.520  
00:29:59.520  Persistent Memory Region Support
00:29:59.520  ================================
00:29:59.520  Supported:                             No
00:29:59.520  
00:29:59.520  Admin Command Set Attributes
00:29:59.520  ============================
00:29:59.520  Security Send/Receive:                 Not Supported
00:29:59.520  Format NVM:                            Not Supported
00:29:59.520  Firmware Activate/Download:            Not Supported
00:29:59.520  Namespace Management:                  Not Supported
00:29:59.520  Device Self-Test:                      Not Supported
00:29:59.520  Directives:                            Not Supported
00:29:59.520  NVMe-MI:                               Not Supported
00:29:59.520  Virtualization Management:             Not Supported
00:29:59.520  Doorbell Buffer Config:                Not Supported
00:29:59.520  Get LBA Status Capability:             Not Supported
00:29:59.520  Command & Feature Lockdown Capability: Not Supported
00:29:59.520  Abort Command Limit:                   1
00:29:59.520  Async Event Request Limit:             4
00:29:59.520  Number of Firmware Slots:              N/A
00:29:59.520  Firmware Slot 1 Read-Only:             N/A
00:29:59.520  Firmware Activation Without Reset:     N/A
00:29:59.520  Multiple Update Detection Support:     N/A
00:29:59.520  Firmware Update Granularity:           No Information Provided
00:29:59.520  Per-Namespace SMART Log:               No
00:29:59.520  Asymmetric Namespace Access Log Page:  Not Supported
00:29:59.520  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:29:59.520  Command Effects Log Page:              Not Supported
00:29:59.520  Get Log Page Extended Data:            Supported
00:29:59.520  Telemetry Log Pages:                   Not Supported
00:29:59.520  Persistent Event Log Pages:            Not Supported
00:29:59.520  Supported Log Pages Log Page:          May Support
00:29:59.520  Commands Supported & Effects Log Page: Not Supported
00:29:59.520  Feature Identifiers & Effects Log Page: May Support
00:29:59.520  NVMe-MI Commands & Effects Log Page:   May Support
00:29:59.520  Data Area 4 for Telemetry Log:         Not Supported
00:29:59.520  Error Log Page Entries Supported:      128
00:29:59.520  Keep Alive:                            Not Supported
00:29:59.520  
00:29:59.520  NVM Command Set Attributes
00:29:59.520  ==========================
00:29:59.520  Submission Queue Entry Size
00:29:59.520    Max:                       1
00:29:59.520    Min:                       1
00:29:59.520  Completion Queue Entry Size
00:29:59.520    Max:                       1
00:29:59.520    Min:                       1
00:29:59.520  Number of Namespaces:        0
00:29:59.520  Compare Command:             Not Supported
00:29:59.520  Write Uncorrectable Command: Not Supported
00:29:59.520  Dataset Management Command:  Not Supported
00:29:59.520  Write Zeroes Command:        Not Supported
00:29:59.520  Set Features Save Field:     Not Supported
00:29:59.520  Reservations:                Not Supported
00:29:59.520  Timestamp:                   Not Supported
00:29:59.520  Copy:                        Not Supported
00:29:59.520  Volatile Write Cache:        Not Present
00:29:59.520  Atomic Write Unit (Normal):  1
00:29:59.520  Atomic Write Unit (PFail):   1
00:29:59.520  Atomic Compare & Write Unit: 1
00:29:59.520  Fused Compare & Write:       Supported
00:29:59.520  Scatter-Gather List
00:29:59.520    SGL Command Set:           Supported
00:29:59.520    SGL Keyed:                 Supported
00:29:59.520    SGL Bit Bucket Descriptor: Not Supported
00:29:59.520    SGL Metadata Pointer:      Not Supported
00:29:59.520    Oversized SGL:             Not Supported
00:29:59.520    SGL Metadata Address:      Not Supported
00:29:59.520    SGL Offset:                Supported
00:29:59.520    Transport SGL Data Block:  Not Supported
00:29:59.520  Replay Protected Memory Block:  Not Supported
00:29:59.520  
00:29:59.520  Firmware Slot Information
00:29:59.520  =========================
00:29:59.520  Active slot:                 0
00:29:59.520  
00:29:59.520  
00:29:59.520  Error Log
00:29:59.520  =========
00:29:59.520  
00:29:59.520  Active Namespaces
00:29:59.520  =================
00:29:59.520  Discovery Log Page
00:29:59.520  ==================
00:29:59.520  Generation Counter:                    2
00:29:59.520  Number of Records:                     2
00:29:59.520  Record Format:                         0
00:29:59.520  
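The GET LOG PAGE (02) commands earlier in this run encode the target page in cdw10: bits 7:0 are the log identifier and bits 31:16 are NUMDL, the 0-based dword count. So cdw10:00ff0070 is log page 0x70 (Discovery) for 0x100 dwords = 0x400 bytes, and cdw10:02ff0070 asks for 0x300 dwords = 0xc00 bytes, matching the SGL lengths logged with each command. A sketch of issuing the same read with SPDK's public API; it assumes a connected ctrlr, and the helper name is illustrative:

    /* Sketch: fetch the Discovery log page (LID 0x70) printed below. */
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"
    #include <stdbool.h>

    static void
    get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;                /* flag completion for the poller */
    }

    static int
    read_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
                       struct spdk_nvmf_discovery_log_page *page, uint32_t size)
    {
        bool done = false;
        /* The driver builds cdw10 from these arguments:
         * LID = SPDK_NVME_LOG_DISCOVERY (0x70), NUMDL = size / 4 - 1. */
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  0 /* nsid, as in the log */,
                                                  page, size, 0 /* offset */,
                                                  get_log_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }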
00:29:59.520  Discovery Log Entry 0
00:29:59.520  ----------------------
00:29:59.520  Transport Type:                        1 (RDMA)
00:29:59.520  Address Family:                        1 (IPv4)
00:29:59.520  Subsystem Type:                        3 (Current Discovery Subsystem)
00:29:59.520  Entry Flags:
00:29:59.520    Duplicate Returned Information:                      1
00:29:59.520    Explicit Persistent Connection Support for Discovery: 1
00:29:59.520  Transport Requirements:
00:29:59.520    Secure Channel:                      Not Required
00:29:59.520  Port ID:                               0 (0x0000)
00:29:59.520  Controller ID:                         65535 (0xffff)
00:29:59.520  Admin Max SQ Size:                     128
00:29:59.520  Transport Service Identifier:          4420                            
00:29:59.520  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:29:59.520  Transport Address:                     192.168.100.8                                                                                                                                                                                                                                                   
00:29:59.520  Transport Specific Address Subtype - RDMA
00:29:59.520    RDMA QP Service Type:                1 (Reliable Connected)
00:29:59.520    RDMA Provider Type:                  1 (No provider specified)
00:29:59.520    RDMA CM Service:                     1 (RDMA_CM)
00:29:59.520  Discovery Log Entry 1
00:29:59.520  ----------------------
00:29:59.520  Transport Type:                        1 (RDMA)
00:29:59.520  Address Family:                        1 (IPv4)
00:29:59.520  Subsystem Type:                        2 (NVM Subsystem)
00:29:59.520  Entry Flags:
00:29:59.520    Duplicate Returned Information:                      0
00:29:59.520    Explicit Persistent Connection Support for Discovery: 0
00:29:59.520  Transport Requirements:
00:29:59.520    Secure Channel:                      Not Required
00:29:59.520  Port ID:                               0 (0x0000)
00:29:59.520  Controller ID:                         65535 (0xffff)
00:29:59.520  Admin Max SQ Size:                     128
00:29:59.520  [2024-12-14 13:55:59.039845] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:29:59.520  [2024-12-14 13:55:59.039862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.520  [2024-12-14 13:55:59.039875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.520  [2024-12-14 13:55:59.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.520  [2024-12-14 13:55:59.039901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.520  [2024-12-14 13:55:59.039914] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x181c00
00:29:59.520  [2024-12-14 13:55:59.039936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.520  [2024-12-14 13:55:59.039955] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.520  [2024-12-14 13:55:59.039967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.039982] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.039995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040007] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040025] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040053] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:29:59.521  [2024-12-14 13:55:59.040062] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:29:59.521  [2024-12-14 13:55:59.040073] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040085] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040130] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040150] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040165] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040205] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040225] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040240] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040278] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040300] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040314] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040350] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040369] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040381] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040416] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040436] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040450] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040482] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040503] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040515] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040544] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040564] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040578] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040621] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040640] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040652] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040681] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040700] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040716] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040751] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040770] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040786] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040815] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040834] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040848] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040880] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040899] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040910] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.040952] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.040967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.040975] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.040989] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.041020] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.041028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.041039] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041051] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.041085] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.041096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.041104] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041118] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.041165] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.041174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:29:59.521  [2024-12-14 13:55:59.041184] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041196] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.521  [2024-12-14 13:55:59.041210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.521  [2024-12-14 13:55:59.041226] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.521  [2024-12-14 13:55:59.041236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041245] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041258] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041290] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041309] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041323] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041355] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041376] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041389] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041426] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041445] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041457] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041490] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041511] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041525] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041564] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041585] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041597] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041628] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041653] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041667] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041702] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041720] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041732] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041763] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041782] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041797] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041830] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041850] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041864] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.041877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.041901] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.041911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.041919] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.045948] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.045969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.522  [2024-12-14 13:55:59.046011] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.522  [2024-12-14 13:55:59.046021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0
00:29:59.522  [2024-12-14 13:55:59.046033] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x181c00
00:29:59.522  [2024-12-14 13:55:59.046044] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
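The long run of FABRIC PROPERTY GET qid:0 cid:3 completions above is the shutdown poll: after CC.SHN is set via the PROPERTY SET at 13:55:59.039995, the host re-reads CSTS until SHST signals completion. cdw0:1 decodes to RDY=1, SHST=0 (shutdown not yet done); the final cdw0:9 decodes to SHST=2, shutdown complete, which is why the destruct finishes right after. A worked decode using the register union from spdk/nvme_spec.h:

    /* Decode of the CSTS values polled during the shutdown above. */
    #include "spdk/nvme_spec.h"
    #include <stdio.h>

    int main(void)
    {
        union spdk_nvme_csts_register csts;

        csts.raw = 0x1;       /* the repeated cdw0:1 completions */
        printf("rdy=%u shst=%u\n",
               (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst);   /* 1, 0 */

        csts.raw = 0x9;       /* the final cdw0:9 completion */
        printf("rdy=%u shst=%u complete=%d\n",
               (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst,
               csts.bits.shst == SPDK_NVME_SHST_COMPLETE);           /* 1, 2, 1 */
        return 0;
    }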
00:29:59.522  Transport Service Identifier:          4420                            
00:29:59.522  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:cnode1
00:29:59.522  Transport Address:                     192.168.100.8                                                                                                                                                                                                                                                   
00:29:59.522  Transport Specific Address Subtype - RDMA
00:29:59.522    RDMA QP Service Type:                1 (Reliable Connected)
00:29:59.522    RDMA Provider Type:                  1 (No provider specified)
00:29:59.522    RDMA CM Service:                     1 (RDMA_CM)
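Each record above maps onto struct spdk_nvmf_discovery_log_page_entry; its traddr and trsvcid fields are fixed-width and space-padded, which is why the printed values carry trailing padding. A sketch of turning Entry 1 (the NVM subsystem nqn.2016-06.io.spdk:cnode1) into a transport ID for a follow-up connect, which is in effect what the next spdk_nvme_identify invocation below does; the helper names are illustrative:

    /* Sketch: build a transport ID from a discovery log entry like
     * "Discovery Log Entry 1" above (RDMA, IPv4, nqn.2016-06.io.spdk:cnode1). */
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"
    #include <stdio.h>
    #include <string.h>

    static void
    trim_padding(char *s)
    {
        size_t n = strlen(s);
        while (n > 0 && s[n - 1] == ' ') {
            s[--n] = '\0';                  /* drop the fixed-width padding */
        }
    }

    static void
    entry_to_trid(const struct spdk_nvmf_discovery_log_page_entry *e,
                  struct spdk_nvme_transport_id *trid)
    {
        memset(trid, 0, sizeof(*trid));
        spdk_nvme_trid_populate_transport(trid,
                                          (enum spdk_nvme_transport_type)e->trtype);
        trid->adrfam = (enum spdk_nvmf_adrfam)e->adrfam;
        snprintf(trid->traddr, sizeof(trid->traddr), "%.*s",
                 (int)sizeof(e->traddr), e->traddr);
        snprintf(trid->trsvcid, sizeof(trid->trsvcid), "%.*s",
                 (int)sizeof(e->trsvcid), e->trsvcid);
        snprintf(trid->subnqn, sizeof(trid->subnqn), "%.*s",
                 (int)sizeof(e->subnqn), e->subnqn);
        trim_padding(trid->traddr);
        trim_padding(trid->trsvcid);
    }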
00:29:59.522   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:rdma         adrfam:IPv4         traddr:192.168.100.8         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:59.522  [2024-12-14 13:55:59.209024] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:29:59.522  [2024-12-14 13:55:59.209096] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460729 ]
00:29:59.782  [2024-12-14 13:55:59.291008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:29:59.782  [2024-12-14 13:55:59.291088] nvme_rdma.c:2017:nvme_rdma_ctrlr_create_qpair: *DEBUG*: rqpair 0x2000003d6ec0, append_copy disabled
00:29:59.782  [2024-12-14 13:55:59.291123] nvme_rdma.c:2460:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:29:59.782  [2024-12-14 13:55:59.291149] nvme_rdma.c:1238:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:29:59.782  [2024-12-14 13:55:59.291162] nvme_rdma.c:1242:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:29:59.782  [2024-12-14 13:55:59.291205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:29:59.782  [2024-12-14 13:55:59.301433] nvme_rdma.c: 459:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:29:59.782  [2024-12-14 13:55:59.316255] nvme_rdma.c:1124:nvme_rdma_connect_established: *DEBUG*: rc = 0
00:29:59.782  [2024-12-14 13:55:59.316277] nvme_rdma.c:1129:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:29:59.782  [2024-12-14 13:55:59.316295] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316310] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316319] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316329] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316338] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316349] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316358] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316368] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316376] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316388] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316396] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x181c00
00:29:59.782  [2024-12-14 13:55:59.316406] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316416] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316426] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316434] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316444] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316453] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316464] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316472] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316482] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316490] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316500] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316509] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316525] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316534] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316543] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316552] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316564] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316574] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316585] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316594] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316603] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:29:59.783  [2024-12-14 13:55:59.316612] nvme_rdma.c:1146:nvme_rdma_connect_established: *DEBUG*: rc = 0
00:29:59.783  [2024-12-14 13:55:59.316622] nvme_rdma.c:1151:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:29:59.783  [2024-12-14 13:55:59.316648] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.316669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cccc0 len:0x400 key:0x181c00
00:29:59.783  [2024-12-14 13:55:59.321941] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.321972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.321991] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322006] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:59.783  [2024-12-14 13:55:59.322020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322056] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322099] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322137] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322165] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322205] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
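The two PROPERTY GET completions above return VS and CAP for cnode1: cdw0:10300 is VS 1.3.0, and cdw0:1e01007f is the low dword of CAP, i.e. MQES=0x7f (128 queue entries, 0-based), CQR=1, AMS=0 and TO=0x1e (30 x 500 ms = 15000 ms), the same figures spdk_nvme_identify prints as Maximum Queue Entries, Contiguous Queues Required and Reset Timeout. A worked decode via the spec unions:

    /* Decode of cdw0:10300 (VS) and cdw0:1e01007f (CAP, low dword) above. */
    #include "spdk/nvme_spec.h"
    #include <stdio.h>

    int main(void)
    {
        union spdk_nvme_vs_register vs;
        union spdk_nvme_cap_register cap;

        vs.raw = 0x10300;
        printf("NVMe %u.%u.%u\n", (unsigned)vs.bits.mjr,
               (unsigned)vs.bits.mnr, (unsigned)vs.bits.ter);       /* 1.3.0 */

        cap.raw = 0x1e01007f;               /* upper dword not shown in cdw0 */
        printf("MQES=%u (%u entries), CQR=%u, TO=%u (%u ms)\n",
               (unsigned)cap.bits.mqes, (unsigned)cap.bits.mqes + 1,
               (unsigned)cap.bits.cqr,
               (unsigned)cap.bits.to, (unsigned)cap.bits.to * 500);
        return 0;
    }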
00:29:59.783  [2024-12-14 13:55:59.322226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322235] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322261] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322292] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322324] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322336] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322368] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322389] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:29:59.783  [2024-12-14 13:55:59.322401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322410] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322536] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:29:59.783  [2024-12-14 13:55:59.322547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322561] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322597] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:59.783  [2024-12-14 13:55:59.322629] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322640] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.783  [2024-12-14 13:55:59.322674] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322695] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:59.783  [2024-12-14 13:55:59.322707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:29:59.783  [2024-12-14 13:55:59.322721] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322733] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:29:59.783  [2024-12-14 13:55:59.322749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:29:59.783  [2024-12-14 13:55:59.322769] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181c00
00:29:59.783  [2024-12-14 13:55:59.322844] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.783  [2024-12-14 13:55:59.322853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:59.783  [2024-12-14 13:55:59.322873] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:29:59.783  [2024-12-14 13:55:59.322883] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:29:59.783  [2024-12-14 13:55:59.322894] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:29:59.783  [2024-12-14 13:55:59.322905] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:29:59.783  [2024-12-14 13:55:59.322916] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:29:59.783  [2024-12-14 13:55:59.322925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:29:59.783  [2024-12-14 13:55:59.322945] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:29:59.783  [2024-12-14 13:55:59.322979] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.783  [2024-12-14 13:55:59.322991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.323021] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323044] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce100 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.784  [2024-12-14 13:55:59.323068] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce240 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.784  [2024-12-14 13:55:59.323090] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.784  [2024-12-14 13:55:59.323111] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.784  [2024-12-14 13:55:59.323132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323146] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323176] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.323218] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323238] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:29:59.784  [2024-12-14 13:55:59.323249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323258] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323295] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.323336] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323434] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323470] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x1000 key:0x181c00
00:29:59.784  [2024-12-14 13:55:59.323524] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323562] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:29:59.784  [2024-12-14 13:55:59.323587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323596] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323631] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x1000 key:0x181c00
00:29:59.784  [2024-12-14 13:55:59.323710] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323754] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323786] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x1000 key:0x181c00
00:29:59.784  [2024-12-14 13:55:59.323836] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.323844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.323863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323873] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.323893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323957] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:29:59.784  [2024-12-14 13:55:59.323968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:29:59.784  [2024-12-14 13:55:59.323977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:29:59.784  [2024-12-14 13:55:59.324012] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.324040] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:59.784  [2024-12-14 13:55:59.324070] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.324086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.324097] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324109] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.324119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.324128] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324142] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.324179] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.324187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.324200] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324212] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.324255] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.324267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.324276] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324290] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.784  [2024-12-14 13:55:59.324324] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.784  [2024-12-14 13:55:59.324332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0
00:29:59.784  [2024-12-14 13:55:59.324343] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324366] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x181c00
00:29:59.784  [2024-12-14 13:55:59.324380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x2000 key:0x181c00
00:29:59.785  [2024-12-14 13:55:59.324395] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x181c00
00:29:59.785  [2024-12-14 13:55:59.324423] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce740 length 0x40 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x200 key:0x181c00
00:29:59.785  [2024-12-14 13:55:59.324456] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce880 length 0x40 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c4000 len:0x1000 key:0x181c00
00:29:59.785  [2024-12-14 13:55:59.324485] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.785  [2024-12-14 13:55:59.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:59.785  [2024-12-14 13:55:59.324524] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324538] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.785  [2024-12-14 13:55:59.324546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:59.785  [2024-12-14 13:55:59.324561] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324570] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.785  [2024-12-14 13:55:59.324580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:59.785  [2024-12-14 13:55:59.324590] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x181c00
00:29:59.785  [2024-12-14 13:55:59.324603] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.785  [2024-12-14 13:55:59.324611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:59.785  [2024-12-14 13:55:59.324629] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x181c00
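The debug stream above traces SPDK's fabrics initialization state machine end to end: FABRIC PROPERTY GET/SET of CC and CSTS until CC.EN = 1 and CSTS.RDY = 1, then IDENTIFY, AER configuration, keep-alive setup, SET FEATURES NUMBER OF QUEUES, and the per-namespace identify round trips. As a hedged illustration only, the kernel NVMe-oF host performs the same property handshake on connect; this sketch assumes nvme-cli is installed and reuses the target address and NQN from this log (run it only against a disposable test target):

  modprobe nvme-rdma
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list          # namespace 1 from the log should surface as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1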
00:29:59.785  =====================================================
00:29:59.785  NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:29:59.785  =====================================================
00:29:59.785  Controller Capabilities/Features
00:29:59.785  ================================
00:29:59.785  Vendor ID:                             8086
00:29:59.785  Subsystem Vendor ID:                   8086
00:29:59.785  Serial Number:                         SPDK00000000000001
00:29:59.785  Model Number:                          SPDK bdev Controller
00:29:59.785  Firmware Version:                      25.01
00:29:59.785  Recommended Arb Burst:                 6
00:29:59.785  IEEE OUI Identifier:                   e4 d2 5c
00:29:59.785  Multi-path I/O
00:29:59.785    May have multiple subsystem ports:   Yes
00:29:59.785    May have multiple controllers:       Yes
00:29:59.785    Associated with SR-IOV VF:           No
00:29:59.785  Max Data Transfer Size:                131072
00:29:59.785  Max Number of Namespaces:              32
00:29:59.785  Max Number of I/O Queues:              127
00:29:59.785  NVMe Specification Version (VS):       1.3
00:29:59.785  NVMe Specification Version (Identify): 1.3
00:29:59.785  Maximum Queue Entries:                 128
00:29:59.785  Contiguous Queues Required:            Yes
00:29:59.785  Arbitration Mechanisms Supported
00:29:59.785    Weighted Round Robin:                Not Supported
00:29:59.785    Vendor Specific:                     Not Supported
00:29:59.785  Reset Timeout:                         15000 ms
00:29:59.785  Doorbell Stride:                       4 bytes
00:29:59.785  NVM Subsystem Reset:                   Not Supported
00:29:59.785  Command Sets Supported
00:29:59.785    NVM Command Set:                     Supported
00:29:59.785  Boot Partition:                        Not Supported
00:29:59.785  Memory Page Size Minimum:              4096 bytes
00:29:59.785  Memory Page Size Maximum:              4096 bytes
00:29:59.785  Persistent Memory Region:              Not Supported
00:29:59.785  Optional Asynchronous Events Supported
00:29:59.785    Namespace Attribute Notices:         Supported
00:29:59.785    Firmware Activation Notices:         Not Supported
00:29:59.785    ANA Change Notices:                  Not Supported
00:29:59.785    PLE Aggregate Log Change Notices:    Not Supported
00:29:59.785    LBA Status Info Alert Notices:       Not Supported
00:29:59.785    EGE Aggregate Log Change Notices:    Not Supported
00:29:59.785    Normal NVM Subsystem Shutdown event: Not Supported
00:29:59.785    Zone Descriptor Change Notices:      Not Supported
00:29:59.785    Discovery Log Change Notices:        Not Supported
00:29:59.785  Controller Attributes
00:29:59.785    128-bit Host Identifier:             Supported
00:29:59.785    Non-Operational Permissive Mode:     Not Supported
00:29:59.785    NVM Sets:                            Not Supported
00:29:59.785    Read Recovery Levels:                Not Supported
00:29:59.785    Endurance Groups:                    Not Supported
00:29:59.785    Predictable Latency Mode:            Not Supported
00:29:59.785    Traffic Based Keep Alive:            Not Supported
00:29:59.785    Namespace Granularity:               Not Supported
00:29:59.785    SQ Associations:                     Not Supported
00:29:59.785    UUID List:                           Not Supported
00:29:59.785    Multi-Domain Subsystem:              Not Supported
00:29:59.785    Fixed Capacity Management:           Not Supported
00:29:59.785    Variable Capacity Management:        Not Supported
00:29:59.785    Delete Endurance Group:              Not Supported
00:29:59.785    Delete NVM Set:                      Not Supported
00:29:59.785    Extended LBA Formats Supported:      Not Supported
00:29:59.785    Flexible Data Placement Supported:   Not Supported
00:29:59.785  
00:29:59.785  Controller Memory Buffer Support
00:29:59.785  ================================
00:29:59.785  Supported:                             No
00:29:59.785  
00:29:59.785  Persistent Memory Region Support
00:29:59.785  ================================
00:29:59.785  Supported:                             No
00:29:59.785  
00:29:59.785  Admin Command Set Attributes
00:29:59.785  ============================
00:29:59.785  Security Send/Receive:                 Not Supported
00:29:59.785  Format NVM:                            Not Supported
00:29:59.785  Firmware Activate/Download:            Not Supported
00:29:59.785  Namespace Management:                  Not Supported
00:29:59.785  Device Self-Test:                      Not Supported
00:29:59.785  Directives:                            Not Supported
00:29:59.785  NVMe-MI:                               Not Supported
00:29:59.785  Virtualization Management:             Not Supported
00:29:59.785  Doorbell Buffer Config:                Not Supported
00:29:59.785  Get LBA Status Capability:             Not Supported
00:29:59.785  Command & Feature Lockdown Capability: Not Supported
00:29:59.785  Abort Command Limit:                   4
00:29:59.785  Async Event Request Limit:             4
00:29:59.785  Number of Firmware Slots:              N/A
00:29:59.785  Firmware Slot 1 Read-Only:             N/A
00:29:59.785  Firmware Activation Without Reset:     N/A
00:29:59.785  Multiple Update Detection Support:     N/A
00:29:59.785  Firmware Update Granularity:           No Information Provided
00:29:59.785  Per-Namespace SMART Log:               No
00:29:59.785  Asymmetric Namespace Access Log Page:  Not Supported
00:29:59.785  Subsystem NQN:                         nqn.2016-06.io.spdk:cnode1
00:29:59.785  Command Effects Log Page:              Supported
00:29:59.785  Get Log Page Extended Data:            Supported
00:29:59.785  Telemetry Log Pages:                   Not Supported
00:29:59.785  Persistent Event Log Pages:            Not Supported
00:29:59.785  Supported Log Pages Log Page:          May Support
00:29:59.785  Commands Supported & Effects Log Page: Not Supported
00:29:59.785  Feature Identifiers & Effects Log Page: May Support
00:29:59.785  NVMe-MI Commands & Effects Log Page:   May Support
00:29:59.785  Data Area 4 for Telemetry Log:         Not Supported
00:29:59.785  Error Log Page Entries Supported:      128
00:29:59.785  Keep Alive:                            Supported
00:29:59.785  Keep Alive Granularity:                10000 ms
00:29:59.785  
00:29:59.785  NVM Command Set Attributes
00:29:59.785  ==========================
00:29:59.785  Submission Queue Entry Size
00:29:59.785    Max:                       64
00:29:59.785    Min:                       64
00:29:59.785  Completion Queue Entry Size
00:29:59.785    Max:                       16
00:29:59.785    Min:                       16
00:29:59.785  Number of Namespaces:        32
00:29:59.785  Compare Command:             Supported
00:29:59.785  Write Uncorrectable Command: Not Supported
00:29:59.785  Dataset Management Command:  Supported
00:29:59.785  Write Zeroes Command:        Supported
00:29:59.785  Set Features Save Field:     Not Supported
00:29:59.785  Reservations:                Supported
00:29:59.785  Timestamp:                   Not Supported
00:29:59.785  Copy:                        Supported
00:29:59.785  Volatile Write Cache:        Present
00:29:59.785  Atomic Write Unit (Normal):  1
00:29:59.785  Atomic Write Unit (PFail):   1
00:29:59.785  Atomic Compare & Write Unit: 1
00:29:59.785  Fused Compare & Write:       Supported
00:29:59.785  Scatter-Gather List
00:29:59.785    SGL Command Set:           Supported
00:29:59.785    SGL Keyed:                 Supported
00:29:59.785    SGL Bit Bucket Descriptor: Not Supported
00:29:59.785    SGL Metadata Pointer:      Not Supported
00:29:59.785    Oversized SGL:             Not Supported
00:29:59.785    SGL Metadata Address:      Not Supported
00:29:59.785    SGL Offset:                Supported
00:29:59.785    Transport SGL Data Block:  Not Supported
00:29:59.785  Replay Protected Memory Block:  Not Supported
00:29:59.785  
00:29:59.785  Firmware Slot Information
00:29:59.785  =========================
00:29:59.785  Active slot:                 1
00:29:59.785  Slot 1 Firmware Revision:    25.01
00:29:59.785  
00:29:59.785  
00:29:59.785  Commands Supported and Effects
00:29:59.785  ==============================
00:29:59.785  Admin Commands
00:29:59.785  --------------
00:29:59.785                    Get Log Page (02h): Supported 
00:29:59.785                        Identify (06h): Supported 
00:29:59.785                           Abort (08h): Supported 
00:29:59.785                    Set Features (09h): Supported 
00:29:59.785                    Get Features (0Ah): Supported 
00:29:59.785      Asynchronous Event Request (0Ch): Supported 
00:29:59.785                      Keep Alive (18h): Supported 
00:29:59.785  I/O Commands
00:29:59.785  ------------
00:29:59.785                           Flush (00h): Supported LBA-Change 
00:29:59.785                           Write (01h): Supported LBA-Change 
00:29:59.785                            Read (02h): Supported 
00:29:59.785                         Compare (05h): Supported 
00:29:59.785                    Write Zeroes (08h): Supported LBA-Change 
00:29:59.785              Dataset Management (09h): Supported LBA-Change 
00:29:59.786                            Copy (19h): Supported LBA-Change 
00:29:59.786  
00:29:59.786  Error Log
00:29:59.786  =========
00:29:59.786  
00:29:59.786  Arbitration
00:29:59.786  ===========
00:29:59.786  Arbitration Burst:           1
00:29:59.786  
00:29:59.786  Power Management
00:29:59.786  ================
00:29:59.786  Number of Power States:          1
00:29:59.786  Current Power State:             Power State #0
00:29:59.786  Power State #0:
00:29:59.786    Max Power:                      0.00 W
00:29:59.786    Non-Operational State:         Operational
00:29:59.786    Entry Latency:                 Not Reported
00:29:59.786    Exit Latency:                  Not Reported
00:29:59.786    Relative Read Throughput:      0
00:29:59.786    Relative Read Latency:         0
00:29:59.786    Relative Write Throughput:     0
00:29:59.786    Relative Write Latency:        0
00:29:59.786    Idle Power:                     Not Reported
00:29:59.786    Active Power:                   Not Reported
00:29:59.786  Non-Operational Permissive Mode: Not Supported
00:29:59.786  
00:29:59.786  Health Information
00:29:59.786  ==================
00:29:59.786  Critical Warnings:
00:29:59.786    Available Spare Space:     OK
00:29:59.786    Temperature:               OK
00:29:59.786    Device Reliability:        OK
00:29:59.786    Read Only:                 No
00:29:59.786    Volatile Memory Backup:    OK
00:29:59.786  Current Temperature:         0 Kelvin (-273 Celsius)
00:29:59.786  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:29:59.786  Available Spare:             0%
00:29:59.786  Available Spare Threshold:   0%
00:29:59.786  [2024-12-14 13:55:59.324750] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce880 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.324766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.324792] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.324804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324813] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.324866] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:29:59.786  [2024-12-14 13:55:59.324882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324937] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.324955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.324970] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.324981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.324994] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325019] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325038] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325061] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:29:59.786  [2024-12-14 13:55:59.325071] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:29:59.786  [2024-12-14 13:55:59.325087] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325099] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325132] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325152] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325166] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325201] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325221] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325235] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325265] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325285] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325302] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325338] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325357] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325369] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325417] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325442] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325459] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325508] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325527] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325539] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325572] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325592] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325606] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325640] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325659] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325675] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325712] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325731] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325745] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325781] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325800] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325816] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.786  [2024-12-14 13:55:59.325849] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.786  [2024-12-14 13:55:59.325861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:29:59.786  [2024-12-14 13:55:59.325869] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x181c00
00:29:59.786  [2024-12-14 13:55:59.325883] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.787  [2024-12-14 13:55:59.325894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.787  [2024-12-14 13:55:59.325922] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.787  [2024-12-14 13:55:59.329944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:29:59.787  [2024-12-14 13:55:59.329964] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x181c00
00:29:59.787  [2024-12-14 13:55:59.329980] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x181c00
00:29:59.787  [2024-12-14 13:55:59.329995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:59.787  [2024-12-14 13:55:59.330021] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:59.787  [2024-12-14 13:55:59.330038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0
00:29:59.787  [2024-12-14 13:55:59.330047] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x181c00
00:29:59.787  [2024-12-14 13:55:59.330059] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds
00:29:59.787  Life Percentage Used:        0%
00:29:59.787  Data Units Read:             0
00:29:59.787  Data Units Written:          0
00:29:59.787  Host Read Commands:          0
00:29:59.787  Host Write Commands:         0
00:29:59.787  Controller Busy Time:        0 minutes
00:29:59.787  Power Cycles:                0
00:29:59.787  Power On Hours:              0 hours
00:29:59.787  Unsafe Shutdowns:            0
00:29:59.787  Unrecoverable Media Errors:  0
00:29:59.787  Lifetime Error Log Entries:  0
00:29:59.787  Warning Temperature Time:    0 minutes
00:29:59.787  Critical Temperature Time:   0 minutes
00:29:59.787  
00:29:59.787  Number of Queues
00:29:59.787  ================
00:29:59.787  Number of I/O Submission Queues:      127
00:29:59.787  Number of I/O Completion Queues:      127
00:29:59.787  
00:29:59.787  Active Namespaces
00:29:59.787  =================
00:29:59.787  Namespace ID: 1
00:29:59.787  Error Recovery Timeout:                Unlimited
00:29:59.787  Command Set Identifier:                NVM (00h)
00:29:59.787  Deallocate:                            Supported
00:29:59.787  Deallocated/Unwritten Error:           Not Supported
00:29:59.787  Deallocated Read Value:                Unknown
00:29:59.787  Deallocate in Write Zeroes:            Not Supported
00:29:59.787  Deallocated Guard Field:               0xFFFF
00:29:59.787  Flush:                                 Supported
00:29:59.787  Reservation:                           Supported
00:29:59.787  Namespace Sharing Capabilities:        Multiple Controllers
00:29:59.787  Size (in LBAs):                        131072 (0GiB)
00:29:59.787  Capacity (in LBAs):                    131072 (0GiB)
00:29:59.787  Utilization (in LBAs):                 131072 (0GiB)
00:29:59.787  NGUID:                                 ABCDEF0123456789ABCDEF0123456789
00:29:59.787  EUI64:                                 ABCDEF0123456789
00:29:59.787  UUID:                                  f1450eeb-5f61-41a2-9c16-f1f5ca067973
00:29:59.787  Thin Provisioning:                     Not Supported
00:29:59.787  Per-NS Atomic Units:                   Yes
00:29:59.787    Atomic Boundary Size (Normal):       0
00:29:59.787    Atomic Boundary Size (PFail):        0
00:29:59.787    Atomic Boundary Offset:              0
00:29:59.787  Maximum Single Source Range Length:    65535
00:29:59.787  Maximum Copy Length:                   65535
00:29:59.787  Maximum Source Range Count:            1
00:29:59.787  NGUID/EUI64 Never Reused:              No
00:29:59.787  Namespace Write Protected:             No
00:29:59.787  Number of LBA Formats:                 1
00:29:59.787  Current LBA Format:                    LBA Format #00
00:29:59.787  LBA Format #00: Data Size:   512  Metadata Size:     0
00:29:59.787  
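The controller dump above is produced by SPDK's identify example application, which host/identify.sh runs against the target. A minimal sketch of reproducing it by hand; the binary path and the -r transport-ID string are assumptions based on a standard SPDK build tree and the connection parameters visible in this log:

  ./build/examples/identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'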
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:29:59.787  rmmod nvme_rdma
00:29:59.787  rmmod nvme_fabrics
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3460437 ']'
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3460437
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3460437 ']'
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3460437
00:29:59.787    13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:29:59.787   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:59.787    13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460437
00:30:00.044   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:00.044   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:00.045   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460437'
00:30:00.045  killing process with pid 3460437
00:30:00.045   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3460437
00:30:00.045   13:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3460437
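The teardown traced above (nvmftestfini) deletes the subsystem over RPC, unloads the host-side kernel modules with a bounded retry, and then kills and waits on the SPDK target process. A minimal manual equivalent, assuming an SPDK checkout; $nvmfpid is a placeholder for the target PID, not a value from this run:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break   # retried: the module can briefly hold references
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"      # stop the SPDK target (reactor_0 above)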
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:30:01.940  
00:30:01.940  real	0m10.776s
00:30:01.940  user	0m14.584s
00:30:01.940  sys	0m5.828s
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:01.940  ************************************
00:30:01.940  END TEST nvmf_identify
00:30:01.940  ************************************
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:01.940  ************************************
00:30:01.940  START TEST nvmf_perf
00:30:01.940  ************************************
00:30:01.940   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:30:01.940  * Looking for test storage...
00:30:01.940  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:01.940     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:01.940    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
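The trace above is scripts/common.sh comparing the detected lcov version (1.15) against 2, splitting on IFS=.-: and walking the components. A minimal standalone sketch of the same less-than comparison (illustrative, not the exact SPDK implementation):

    # Return 0 (true) when dot-separated version $1 is strictly less than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components default to 0, so "1.15" vs "2" compares 1<2 first.
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2: use the --rc branch-coverage flags"

The return 0 from this comparison is what drives the lcov_rc_opt / LCOV_OPTS exports that follow.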
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:01.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:01.941  		--rc genhtml_branch_coverage=1
00:30:01.941  		--rc genhtml_function_coverage=1
00:30:01.941  		--rc genhtml_legend=1
00:30:01.941  		--rc geninfo_all_blocks=1
00:30:01.941  		--rc geninfo_unexecuted_blocks=1
00:30:01.941  		
00:30:01.941  		'
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:01.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:01.941  		--rc genhtml_branch_coverage=1
00:30:01.941  		--rc genhtml_function_coverage=1
00:30:01.941  		--rc genhtml_legend=1
00:30:01.941  		--rc geninfo_all_blocks=1
00:30:01.941  		--rc geninfo_unexecuted_blocks=1
00:30:01.941  		
00:30:01.941  		'
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:01.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:01.941  		--rc genhtml_branch_coverage=1
00:30:01.941  		--rc genhtml_function_coverage=1
00:30:01.941  		--rc genhtml_legend=1
00:30:01.941  		--rc geninfo_all_blocks=1
00:30:01.941  		--rc geninfo_unexecuted_blocks=1
00:30:01.941  		
00:30:01.941  		'
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:01.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:01.941  		--rc genhtml_branch_coverage=1
00:30:01.941  		--rc genhtml_function_coverage=1
00:30:01.941  		--rc genhtml_legend=1
00:30:01.941  		--rc geninfo_all_blocks=1
00:30:01.941  		--rc geninfo_unexecuted_blocks=1
00:30:01.941  		
00:30:01.941  		'
00:30:01.941   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:01.941     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:01.941    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:01.941     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:30:02.198     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:30:02.198     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:02.198     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:02.198     13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:02.198      13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.198      13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.198      13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.198      13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:30:02.198      13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
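The PATH echoed above carries the Go/protoc/golangci directories many times over because paths/export.sh prepends them each time it is sourced, once per nested test script. That is harmless, just noisy; if one wanted to compact it, an order-preserving de-duplication could look like this (a sketch, not something the test suite does):

    # Rebuild PATH keeping only the first occurrence of each directory.
    dedup_path() {
        local IFS=: p out='' seen=:
        for p in $PATH; do
            [[ $seen == *":$p:"* ]] && continue
            seen+="$p:"
            out+="${out:+:}$p"
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)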
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:02.198  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
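The "integer expression expected" message above is a real, if benign, bug in the traced script: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the variable behind the test is empty, and test's -eq requires integer operands, so [ exits with status 2 and the branch is simply not taken. The usual fix is to default the flag before testing it; a sketch, where SPDK_TEST_SOME_FLAG is a hypothetical stand-in for whichever variable is unset here:

    : "${SPDK_TEST_SOME_FLAG:=0}"              # empty/unset becomes 0
    if [ "$SPDK_TEST_SOME_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi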
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:02.198    13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable
00:30:02.198   13:56:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=()
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:08.753   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
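gather_supported_nvmf_pci_devs works off a pci_bus_cache map keyed by "vendor:device" hex IDs: the Intel E810/X722 and Mellanox mlx5 IDs are collected into per-family arrays, and since this rig prefers mlx5 for RDMA, pci_devs is narrowed to the Mellanox list (the two 0x15b3:0x1015 ConnectX-4 Lx ports found below). A rough equivalent with plain pciutils, assuming lspci is available:

    # List PCI addresses of Mellanox (vendor 0x15b3) devices, the same two
    # ports the cache lookup yields on this node.
    lspci -Dn | awk '$3 ~ /^15b3:/ {print $1}'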
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:30:08.754  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:30:08.754  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:30:08.754  Found net devices under 0000:d9:00.0: mlx_0_0
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:30:08.754  Found net devices under 0000:d9:00.1: mlx_0_1
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm
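load_ib_rdma_modules is just the modprobe sequence traced above; modprobe is idempotent, so reloading on every test pass is safe. Condensed (requires root):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done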
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:30:08.754  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:08.754      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:30:08.754      altname enp217s0f0np0
00:30:08.754      altname ens818f0np0
00:30:08.754      inet 192.168.100.8/24 scope global mlx_0_0
00:30:08.754         valid_lft forever preferred_lft forever
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:30:08.754  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:08.754      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:30:08.754      altname enp217s0f1np1
00:30:08.754      altname ens818f1np1
00:30:08.754      inet 192.168.100.9/24 scope global mlx_0_1
00:30:08.754         valid_lft forever preferred_lft forever
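allocate_nic_ips has now confirmed 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1 (NVMF_IP_PREFIX plus addresses from NVMF_IP_LEAST_ADDR=8 onward). The get_ip_address helper it calls is exactly the pipeline in the trace:

    # First IPv4 address of an interface, with the /prefix stripped.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node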
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:30:08.754   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:30:08.754    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:30:08.754      13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:30:08.754      13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:30:08.754     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1
00:30:08.755     13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:30:08.755  192.168.100.9'
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:30:08.755  192.168.100.9'
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:30:08.755  192.168.100.9'
00:30:08.755    13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
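The two-address split above is just head/tail over the newline-separated list:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)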
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3464379
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3464379
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3464379 ']'
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:08.755  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:08.755   13:56:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:30:08.755  [2024-12-14 13:56:08.377644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:30:08.755  [2024-12-14 13:56:08.377764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.013  [2024-12-14 13:56:08.512650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:09.013  [2024-12-14 13:56:08.612530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:09.013  [2024-12-14 13:56:08.612582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:09.013  [2024-12-14 13:56:08.612594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:09.013  [2024-12-14 13:56:08.612607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:09.013  [2024-12-14 13:56:08.612617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:09.013  [2024-12-14 13:56:08.615011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:09.013  [2024-12-14 13:56:08.615085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:30:09.013  [2024-12-14 13:56:08.615189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.013  [2024-12-14 13:56:08.615198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
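nvmfappstart has launched nvmf_tgt (pid 3464379) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, using the same rpc.py and a real RPC (rpc_get_methods), might look like:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                      # ~10 s budget
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The actual helper is more careful (it also verifies the pid stays alive), so treat this as a sketch.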
00:30:09.577   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:09.577   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:30:09.577   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:09.578   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:09.578   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:30:09.578   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:09.578   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:30:09.578   13:56:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:30:12.854    13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:30:12.854    13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:30:12.854   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0
00:30:12.854    13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:30:13.112   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:30:13.112   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']'
00:30:13.112   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:30:13.112   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']'
00:30:13.112   13:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
00:30:13.369  [2024-12-14 13:56:12.969267] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:30:13.369  [2024-12-14 13:56:12.993869] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f5b94dbd940) succeed.
00:30:13.370  [2024-12-14 13:56:13.003728] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f5b94d79940) succeed.
00:30:13.627   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:13.885   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:30:13.885   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:13.885   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:30:13.885   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:30:14.142   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:30:14.400  [2024-12-14 13:56:13.944404] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:30:14.400   13:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
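At this point the target side is fully provisioned. The rpc.py sequence just traced, condensed into one place ($rpc expands to the scripts/rpc.py path above):

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Two namespaces back the subsystem, the 64 MiB Malloc0 ramdisk and the physical Nvme0n1 at 0000:d8:00.0, which is why the fabric perf runs below report both NSID 1 and NSID 2.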
00:30:14.657   13:56:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:30:14.657   13:56:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:30:14.657   13:56:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:30:14.657   13:56:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:30:16.026  Initializing NVMe Controllers
00:30:16.026  Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:30:16.026  Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:30:16.026  Initialization complete. Launching workers.
00:30:16.026  ========================================================
00:30:16.026                                                                             Latency(us)
00:30:16.026  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:16.026  PCIE (0000:d8:00.0) NSID 1 from core  0:   93281.70     364.38     342.45      36.94    6237.86
00:30:16.027  ========================================================
00:30:16.027  Total                                  :   93281.70     364.38     342.45      36.94    6237.86
00:30:16.027  
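This first run is the local baseline: spdk_nvme_perf drives the PCIe controller at 0000:d8:00.0 directly with -q 32 (queue depth), -o 4096 (4 KiB I/Os), -w randrw -M 50 (50/50 random read/write mix) for -t 1 second. The fabric runs that follow keep the same workload shape and swap only -r (plus whatever -q/-o the step under test calls for), e.g.:

    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'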
00:30:16.027   13:56:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:30:19.393  Initializing NVMe Controllers
00:30:19.393  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:19.393  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:19.393  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:19.393  Initialization complete. Launching workers.
00:30:19.393  ========================================================
00:30:19.393                                                                                                                     Latency(us)
00:30:19.393  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:19.393  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    6006.99      23.46     166.08      60.28    7033.81
00:30:19.393  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    4654.00      18.18     214.45      83.80    7103.03
00:30:19.393  ========================================================
00:30:19.393  Total                                                                          :   10660.99      41.64     187.20      60.28    7103.03
00:30:19.393  
00:30:19.650   13:56:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:30:22.927  Initializing NVMe Controllers
00:30:22.927  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:22.927  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:22.927  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:22.927  Initialization complete. Launching workers.
00:30:22.927  ========================================================
00:30:22.927                                                                                                                     Latency(us)
00:30:22.927  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:22.927  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   16101.97      62.90    1988.79     546.54    5723.94
00:30:22.927  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    4031.99      15.75    7962.87    5599.28    8417.19
00:30:22.927  ========================================================
00:30:22.927  Total                                                                          :   20133.96      78.65    3185.15     546.54    8417.19
00:30:22.927  
00:30:23.185   13:56:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:30:23.185   13:56:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:30:28.447  Initializing NVMe Controllers
00:30:28.447  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:28.447  Controller IO queue size 128, less than required.
00:30:28.447  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.447  Controller IO queue size 128, less than required.
00:30:28.447  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.447  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:28.447  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:28.447  Initialization complete. Launching workers.
00:30:28.447  ========================================================
00:30:28.447                                                                                                                     Latency(us)
00:30:28.447  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:28.447  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    3314.00     828.50   39322.32   15956.09  408220.41
00:30:28.447  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    3492.00     873.00   37521.96   16683.29  400002.11
00:30:28.448  ========================================================
00:30:28.448  Total                                                                          :    6806.00    1701.50   38398.60   15956.09  408220.41
00:30:28.448  
00:30:28.448   13:56:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:30:28.448  No valid NVMe controllers or AIO or URING devices found
00:30:28.448  Initializing NVMe Controllers
00:30:28.448  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:28.448  Controller IO queue size 128, less than required.
00:30:28.448  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.448  WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:30:28.448  Controller IO queue size 128, less than required.
00:30:28.448  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.448  WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:30:28.448  WARNING: Some requested NVMe devices were skipped
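This run passes -o 36964, which is not a multiple of the 512-byte sector size, so both namespaces are dropped from the test and the run ends with no valid controllers; it exercises the skip path rather than measuring throughput. The arithmetic behind the warning:

    echo $(( 36964 % 512 ))   # -> 100, so the I/O size cannot map to whole sectors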
00:30:28.705   13:56:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:30:33.965  Initializing NVMe Controllers
00:30:33.965  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:33.965  Controller IO queue size 128, less than required.
00:30:33.965  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:33.965  Controller IO queue size 128, less than required.
00:30:33.965  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:33.965  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:33.965  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:33.965  Initialization complete. Launching workers.
00:30:33.965  
00:30:33.965  ====================
00:30:33.965  lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:30:33.965  RDMA transport:
00:30:33.965  	dev name:              mlx5_0
00:30:33.965  	polls:                 311867
00:30:33.965  	idle_polls:            309406
00:30:33.965  	completions:           36594
00:30:33.965  	queued_requests:       1
00:30:33.965  	total_send_wrs:        18297
00:30:33.965  	send_doorbell_updates: 2245
00:30:33.965  	total_recv_wrs:        18424
00:30:33.965  	recv_doorbell_updates: 2246
00:30:33.965  	---------------------------------
00:30:33.965  
00:30:33.965  ====================
00:30:33.965  lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:30:33.965  RDMA transport:
00:30:33.965  	dev name:              mlx5_0
00:30:33.965  	polls:                 310197
00:30:33.965  	idle_polls:            309958
00:30:33.965  	completions:           17342
00:30:33.965  	queued_requests:       1
00:30:33.965  	total_send_wrs:        8671
00:30:33.965  	send_doorbell_updates: 234
00:30:33.965  	total_recv_wrs:        8798
00:30:33.965  	recv_doorbell_updates: 235
00:30:33.965  	---------------------------------
00:30:33.965  ========================================================
00:30:33.965                                                                                                                     Latency(us)
00:30:33.965  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:33.965  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    4574.00    1143.50   28489.13   13315.45  390020.00
00:30:33.965  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    2167.50     541.87   60470.69   32004.56  398510.81
00:30:33.965  ========================================================
00:30:33.965  Total                                                                          :    6741.49    1685.37   38771.71   13315.45  398510.81
00:30:33.965  
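--transport-stat adds the per-queue RDMA counters above: polls vs idle_polls shows how often the poller actually found completions, completions counts CQ entries, and total_send_wrs/total_recv_wrs against their doorbell_updates show how well work requests batched per doorbell ring. For NSID 1, the useful-poll fraction works out to under one percent:

    awk 'BEGIN { printf "%.2f%%\n", (311867 - 309406) / 311867 * 100 }'   # ~0.79%

which is unsurprising for a polled-mode transport at this load: most polls find an empty completion queue.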
00:30:33.965   13:56:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:30:33.965   13:56:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:33.965   13:56:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:30:33.965   13:56:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']'
00:30:33.965    13:56:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a5bd4d12-7cb3-4616-919d-88b571e61186
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a5bd4d12-7cb3-4616-919d-88b571e61186
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=a5bd4d12-7cb3-4616-919d-88b571e61186
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:30:40.514    13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:30:40.514    {
00:30:40.514      "uuid": "a5bd4d12-7cb3-4616-919d-88b571e61186",
00:30:40.514      "name": "lvs_0",
00:30:40.514      "base_bdev": "Nvme0n1",
00:30:40.514      "total_data_clusters": 476466,
00:30:40.514      "free_clusters": 476466,
00:30:40.514      "block_size": 512,
00:30:40.514      "cluster_size": 4194304
00:30:40.514    }
00:30:40.514  ]'
00:30:40.514    13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a5bd4d12-7cb3-4616-919d-88b571e61186") .free_clusters'
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466
00:30:40.514    13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a5bd4d12-7cb3-4616-919d-88b571e61186") .cluster_size'
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864
00:30:40.514  1905864
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']'
00:30:40.514   13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
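get_lvs_free_mb converts free_clusters x cluster_size into MiB; perf.sh then caps the logical volume at 20480 MiB so the test does not claim the whole ~1.8 TiB device:

    fc=476466; cs=4194304                # from the lvstore dump above
    echo $(( fc * (cs / 1048576) ))      # -> 1905864 MiB free; the test clamps to 20480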
00:30:40.514    13:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5bd4d12-7cb3-4616-919d-88b571e61186 lbd_0 20480
00:30:40.514   13:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3e8c3bd2-b11b-4b1a-85a2-aaaa9b8d4846
00:30:40.514    13:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3e8c3bd2-b11b-4b1a-85a2-aaaa9b8d4846 lvs_n_0
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=552dc837-9810-427a-8cd9-77db9d868ed9
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 552dc837-9810-427a-8cd9-77db9d868ed9
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=552dc837-9810-427a-8cd9-77db9d868ed9
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:30:43.039    13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:30:43.039    {
00:30:43.039      "uuid": "a5bd4d12-7cb3-4616-919d-88b571e61186",
00:30:43.039      "name": "lvs_0",
00:30:43.039      "base_bdev": "Nvme0n1",
00:30:43.039      "total_data_clusters": 476466,
00:30:43.039      "free_clusters": 471346,
00:30:43.039      "block_size": 512,
00:30:43.039      "cluster_size": 4194304
00:30:43.039    },
00:30:43.039    {
00:30:43.039      "uuid": "552dc837-9810-427a-8cd9-77db9d868ed9",
00:30:43.039      "name": "lvs_n_0",
00:30:43.039      "base_bdev": "3e8c3bd2-b11b-4b1a-85a2-aaaa9b8d4846",
00:30:43.039      "total_data_clusters": 5114,
00:30:43.039      "free_clusters": 5114,
00:30:43.039      "block_size": 512,
00:30:43.039      "cluster_size": 4194304
00:30:43.039    }
00:30:43.039  ]'
00:30:43.039    13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="552dc837-9810-427a-8cd9-77db9d868ed9") .free_clusters'
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114
00:30:43.039    13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="552dc837-9810-427a-8cd9-77db9d868ed9") .cluster_size'
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456
00:30:43.039  20456
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:30:43.039    13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 552dc837-9810-427a-8cd9-77db9d868ed9 lbd_nest_0 20456
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4f5e6136-dc23-406d-a591-c6c8322425a0
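lvs_n_0 exposes 5114 clusters x 4 MiB = 20456 MiB, slightly less than the 20480 MiB volume that backs it; the difference here, 24 MiB, is presumably what the nested lvolstore keeps for its own metadata. The numbers check out as:

    echo $(( 5114 * 4 ))        # -> 20456 MiB usable in lvs_n_0
    echo $(( 20480 - 20456 ))   # -> 24 MiB of lvolstore overhead

hence the 20456 -gt 20480 guard above evaluating false, and lbd_nest_0 being created at the full 20456 MiB.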
00:30:43.039   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:43.297   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:30:43.297   13:56:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4f5e6136-dc23-406d-a591-c6c8322425a0
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:43.554   13:56:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:30:55.745  Initializing NVMe Controllers
00:30:55.745  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:55.745  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:55.745  Initialization complete. Launching workers.
00:30:55.745  ========================================================
00:30:55.745                                                                                                                     Latency(us)
00:30:55.745  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:55.745  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    5132.18       2.51     194.34      78.75    7283.17
00:30:55.745  ========================================================
00:30:55.745  Total                                                                          :    5132.18       2.51     194.34      78.75    7283.17
00:30:55.745  
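Sanity check on the first run's table: throughput is consistent with IOPS times I/O size, 5132.18 IOPS x 512 B / 2^20 = 2.51 MiB/s, matching the MiB/s column. The same identity holds for the remaining five runs (e.g. the final one: 9815.00 IOPS x 128 KiB = 1226.88 MiB/s).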
00:30:55.745   13:56:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:55.745   13:56:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:07.929  Initializing NVMe Controllers
00:31:07.929  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:07.930  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:07.930  Initialization complete. Launching workers.
00:31:07.930  ========================================================
00:31:07.930                                                                                                                     Latency(us)
00:31:07.930  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:31:07.930  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    2465.50     308.19     404.37     175.41    7190.21
00:31:07.930  ========================================================
00:31:07.930  Total                                                                          :    2465.50     308.19     404.37     175.41    7190.21
00:31:07.930  
00:31:07.930   13:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:07.930   13:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:07.930   13:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:20.111  Initializing NVMe Controllers
00:31:20.111  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.111  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:20.111  Initialization complete. Launching workers.
00:31:20.111  ========================================================
00:31:20.111                                                                                                                     Latency(us)
00:31:20.111  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:31:20.111  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   10259.80       5.01    3118.30    1186.60    8801.97
00:31:20.111  ========================================================
00:31:20.111  Total                                                                          :   10259.80       5.01    3118.30    1186.60    8801.97
00:31:20.111  
00:31:20.111   13:57:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:20.111   13:57:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:30.070  Initializing NVMe Controllers
00:31:30.070  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:30.070  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:30.070  Initialization complete. Launching workers.
00:31:30.070  ========================================================
00:31:30.070                                                                                                                     Latency(us)
00:31:30.070  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:31:30.070  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    3998.70     499.84    8007.22    3866.07   25790.87
00:31:30.070  ========================================================
00:31:30.070  Total                                                                          :    3998.70     499.84    8007.22    3866.07   25790.87
00:31:30.070  
00:31:30.070   13:57:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:30.070   13:57:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:30.070   13:57:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:42.356  Initializing NVMe Controllers
00:31:42.356  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:42.356  Controller IO queue size 128, less than required.
00:31:42.356  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:42.356  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:42.356  Initialization complete. Launching workers.
00:31:42.356  ========================================================
00:31:42.356                                                                                                                     Latency(us)
00:31:42.356  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:31:42.356  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   16614.40       8.11    7706.15    2329.10   16684.22
00:31:42.356  ========================================================
00:31:42.356  Total                                                                          :   16614.40       8.11    7706.15    2329.10   16684.22
00:31:42.356  
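The "Controller IO queue size 128, less than required" notice in the two -q 128 runs is informational: the requested depth fills the controller's 128-entry I/O queue, so excess requests wait in the host driver's software queue, as the second line of the notice says. The runs still complete normally.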
00:31:42.356   13:57:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:42.356   13:57:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:54.557  Initializing NVMe Controllers
00:31:54.557  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:54.557  Controller IO queue size 128, less than required.
00:31:54.557  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:54.557  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:54.557  Initialization complete. Launching workers.
00:31:54.557  ========================================================
00:31:54.557                                                                                                                     Latency(us)
00:31:54.557  Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:31:54.557  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    9815.00    1226.88   13042.36    3815.26   90830.77
00:31:54.557  ========================================================
00:31:54.557  Total                                                                          :    9815.00    1226.88   13042.36    3815.26   90830.77
00:31:54.557  
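Teardown at perf.sh@104-@108 unwinds the stack strictly top-down, presumably because each layer holds the one beneath it open; condensed from the five rpc.py calls that follow (comments added):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1        # stop exporting the namespace
    rpc.py bdev_lvol_delete 4f5e6136-dc23-406d-a591-c6c8322425a0   # nested lvol (lbd_nest_0)
    rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                     # nested lvstore
    rpc.py bdev_lvol_delete 3e8c3bd2-b11b-4b1a-85a2-aaaa9b8d4846   # base lvol backing lvs_n_0
    rpc.py bdev_lvol_delete_lvstore -l lvs_0                       # base lvstore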
00:31:54.557   13:57:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:54.557   13:57:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f5e6136-dc23-406d-a591-c6c8322425a0
00:31:54.557   13:57:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:31:54.557   13:57:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e8c3bd2-b11b-4b1a-85a2-aaaa9b8d4846
00:31:54.557   13:57:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:31:54.557  rmmod nvme_rdma
00:31:54.557  rmmod nvme_fabrics
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3464379 ']'
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3464379
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3464379 ']'
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3464379
00:31:54.557    13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:54.557    13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464379
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464379'
00:31:54.557  killing process with pid 3464379
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3464379
00:31:54.557   13:57:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3464379
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:31:58.747  
00:31:58.747  real	1m56.257s
00:31:58.747  user	7m18.772s
00:31:58.747  sys	0m8.255s
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:31:58.747  ************************************
00:31:58.747  END TEST nvmf_perf
00:31:58.747  ************************************
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.747  ************************************
00:31:58.747  START TEST nvmf_fio_host
00:31:58.747  ************************************
00:31:58.747   13:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:31:58.747  * Looking for test storage...
00:31:58.747  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:31:58.747    13:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:58.747     13:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version
00:31:58.747     13:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:58.747    13:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:58.747     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:58.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:58.747  		--rc genhtml_branch_coverage=1
00:31:58.747  		--rc genhtml_function_coverage=1
00:31:58.747  		--rc genhtml_legend=1
00:31:58.747  		--rc geninfo_all_blocks=1
00:31:58.747  		--rc geninfo_unexecuted_blocks=1
00:31:58.747  		
00:31:58.747  		'
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:58.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:58.747  		--rc genhtml_branch_coverage=1
00:31:58.747  		--rc genhtml_function_coverage=1
00:31:58.747  		--rc genhtml_legend=1
00:31:58.747  		--rc geninfo_all_blocks=1
00:31:58.747  		--rc geninfo_unexecuted_blocks=1
00:31:58.747  		
00:31:58.747  		'
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:58.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:58.747  		--rc genhtml_branch_coverage=1
00:31:58.747  		--rc genhtml_function_coverage=1
00:31:58.747  		--rc genhtml_legend=1
00:31:58.747  		--rc geninfo_all_blocks=1
00:31:58.747  		--rc geninfo_unexecuted_blocks=1
00:31:58.747  		
00:31:58.747  		'
00:31:58.747    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:58.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:58.747  		--rc genhtml_branch_coverage=1
00:31:58.748  		--rc genhtml_function_coverage=1
00:31:58.748  		--rc genhtml_legend=1
00:31:58.748  		--rc geninfo_all_blocks=1
00:31:58.748  		--rc geninfo_unexecuted_blocks=1
00:31:58.748  		
00:31:58.748  		'
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:58.748     13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:58.748      13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748      13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748      13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748      13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:31:58.748      13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:58.748  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
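The "integer expression expected" message a few lines up is benign here: nvmf/common.sh line 33 runs a numeric test on a variable that is empty in this environment, so [ returns status 2 and the script simply falls through to the -n test at line 37, as the trace shows. Illustration (VAR is a placeholder; the real variable name is not visible in this trace):

    [ '' -eq 1 ]              # -> "[: : integer expression expected", exit status 2
    [ "${VAR:-0}" -eq 1 ]     # defensive spelling that treats empty as 0 (illustrative only)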
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:58.748    13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:31:58.748   13:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:32:05.306  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:32:05.306  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:32:05.306   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:32:05.307  Found net devices under 0000:d9:00.0: mlx_0_0
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:32:05.307  Found net devices under 0000:d9:00.1: mlx_0_1
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:32:05.307  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:32:05.307      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:32:05.307      altname enp217s0f0np0
00:32:05.307      altname ens818f0np0
00:32:05.307      inet 192.168.100.8/24 scope global mlx_0_0
00:32:05.307         valid_lft forever preferred_lft forever
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:32:05.307  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:32:05.307      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:32:05.307      altname enp217s0f1np1
00:32:05.307      altname ens818f1np1
00:32:05.307      inet 192.168.100.9/24 scope global mlx_0_1
00:32:05.307         valid_lft forever preferred_lft forever
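The per-interface address lookup traced at common.sh@116-@117 reduces to a one-liner; reconstructed (not quoted) from the trace:

    get_ip_address() {
      local interface=$1
      # first IPv4 address on the interface, with the /prefix stripped
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9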
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:32:05.307      13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:32:05.307      13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:32:05.307     13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:32:05.307  192.168.100.9'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:32:05.307  192.168.100.9'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:32:05.307  192.168.100.9'
00:32:05.307    13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
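RDMA_IP_LIST is a newline-separated list, and common.sh@485-@486 appears to split it with head/tail; the exact pipeline order is not visible in the trace, but this reproduces the traced commands and results:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9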
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3486226
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3486226
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3486226 ']'
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:05.307   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:05.308   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:05.308  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:05.308   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:05.308   13:58:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:05.308  [2024-12-14 13:58:04.747322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:32:05.308  [2024-12-14 13:58:04.747420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:05.308  [2024-12-14 13:58:04.880003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:05.308  [2024-12-14 13:58:04.985313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:05.308  [2024-12-14 13:58:04.985358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:05.308  [2024-12-14 13:58:04.985371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:05.308  [2024-12-14 13:58:04.985385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:05.308  [2024-12-14 13:58:04.985395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:05.308  [2024-12-14 13:58:04.988266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:05.308  [2024-12-14 13:58:04.988300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:32:05.308  [2024-12-14 13:58:04.988373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:05.308  [2024-12-14 13:58:04.988379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:32:05.877   13:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:05.877   13:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:32:05.877   13:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:32:06.135  [2024-12-14 13:58:05.778343] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f517edbd940) succeed.
00:32:06.135  [2024-12-14 13:58:05.788051] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f517ed79940) succeed.
00:32:06.393   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:32:06.393   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:06.393   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:06.393   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:32:06.651  Malloc1
00:32:06.651   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:06.909   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:32:07.167   13:58:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:32:07.425  [2024-12-14 13:58:06.977704] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:32:07.425   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
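Condensing the fio-host target bring-up traced at fio.sh@29 and @32-@36 into one place (script path shortened to rpc.py; every argument is taken from the trace above):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420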
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:07.683    13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:07.683    13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:07.683    13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:07.683   13:58:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
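The fio_plugin trace above (autotest_common.sh@1341-@1356) scans the ioengine's dynamic dependencies for a sanitizer runtime: an ASan-built plugin only loads cleanly when libasan sits first in LD_PRELOAD, ahead of the plugin itself. A condensed sketch of that logic using the paths from the log:

  plugin=./build/fio/spdk_nvme
  # Find the ASan runtime the plugin was linked against, if any.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload runtime first, plugin second, then point fio at the job file and
  # pass the target as an SPDK-style --filename string.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096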
00:32:07.941  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:32:07.941  fio-3.35
00:32:07.941  Starting 1 thread
00:32:10.470  
00:32:10.470  test: (groupid=0, jobs=1): err= 0: pid=3486908: Sat Dec 14 13:58:10 2024
00:32:10.470    read: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(120MiB/2004msec)
00:32:10.470      slat (nsec): min=1484, max=35497, avg=1686.86, stdev=661.54
00:32:10.470      clat (usec): min=3163, max=7497, avg=4158.82, stdev=127.10
00:32:10.470       lat (usec): min=3188, max=7499, avg=4160.51, stdev=127.09
00:32:10.470      clat percentiles (usec):
00:32:10.470       |  1.00th=[ 3752],  5.00th=[ 4113], 10.00th=[ 4113], 20.00th=[ 4146],
00:32:10.470       | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178],
00:32:10.470       | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4178], 95.00th=[ 4228],
00:32:10.470       | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5932], 99.95th=[ 6456],
00:32:10.470       | 99.99th=[ 6980]
00:32:10.470     bw (  KiB/s): min=60136, max=62200, per=100.00%, avg=61234.00, stdev=1003.11, samples=4
00:32:10.470     iops        : min=15034, max=15550, avg=15308.50, stdev=250.78, samples=4
00:32:10.470    write: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(120MiB/2004msec); 0 zone resets
00:32:10.470      slat (nsec): min=1530, max=19522, avg=1770.49, stdev=602.06
00:32:10.470      clat (usec): min=3202, max=7520, avg=4157.47, stdev=131.26
00:32:10.470       lat (usec): min=3220, max=7522, avg=4159.24, stdev=131.25
00:32:10.470      clat percentiles (usec):
00:32:10.470       |  1.00th=[ 3752],  5.00th=[ 4113], 10.00th=[ 4113], 20.00th=[ 4146],
00:32:10.470       | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178],
00:32:10.470       | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4178], 95.00th=[ 4228],
00:32:10.470       | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5866], 99.95th=[ 6980],
00:32:10.470       | 99.99th=[ 7504]
00:32:10.470     bw (  KiB/s): min=60568, max=62376, per=99.98%, avg=61328.00, stdev=763.79, samples=4
00:32:10.470     iops        : min=15142, max=15594, avg=15332.00, stdev=190.95, samples=4
00:32:10.470    lat (msec)   : 4=2.53%, 10=97.47%
00:32:10.470    cpu          : usr=99.15%, sys=0.45%, ctx=15, majf=0, minf=1292
00:32:10.470    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:32:10.470       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:10.470       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:10.470       issued rwts: total=30677,30731,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:10.470       latency   : target=0, window=0, percentile=100.00%, depth=128
00:32:10.470  
00:32:10.470  Run status group 0 (all jobs):
00:32:10.470     READ: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=120MiB (126MB), run=2004-2004msec
00:32:10.470    WRITE: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=120MiB (126MB), run=2004-2004msec
00:32:10.728  -----------------------------------------------------
00:32:10.728  Suppressions used:
00:32:10.728    count      bytes template
00:32:10.728        1         63 /usr/src/fio/parse.c
00:32:10.728        1          8 libtcmalloc_minimal.so
00:32:10.728  -----------------------------------------------------
00:32:10.728  
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:10.728    13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:10.728    13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:10.728    13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:10.728   13:58:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:32:11.293  test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:32:11.293  fio-3.35
00:32:11.293  Starting 1 thread
00:32:13.822  
00:32:13.822  test: (groupid=0, jobs=1): err= 0: pid=3487565: Sat Dec 14 13:58:13 2024
00:32:13.822    read: IOPS=12.3k, BW=192MiB/s (201MB/s)(378MiB/1972msec)
00:32:13.822      slat (nsec): min=2467, max=50516, avg=2963.09, stdev=1386.69
00:32:13.822      clat (usec): min=511, max=10046, avg=1952.88, stdev=1604.96
00:32:13.822       lat (usec): min=514, max=10051, avg=1955.84, stdev=1605.45
00:32:13.822      clat percentiles (usec):
00:32:13.822       |  1.00th=[  799],  5.00th=[  914], 10.00th=[  979], 20.00th=[ 1074],
00:32:13.822       | 30.00th=[ 1156], 40.00th=[ 1237], 50.00th=[ 1385], 60.00th=[ 1516],
00:32:13.822       | 70.00th=[ 1663], 80.00th=[ 1860], 90.00th=[ 5342], 95.00th=[ 5866],
00:32:13.822       | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 9110], 99.95th=[ 9634],
00:32:13.822       | 99.99th=[10028]
00:32:13.822     bw (  KiB/s): min=93184, max=97568, per=48.55%, avg=95272.00, stdev=1794.55, samples=4
00:32:13.822     iops        : min= 5824, max= 6098, avg=5954.50, stdev=112.16, samples=4
00:32:13.822    write: IOPS=6957, BW=109MiB/s (114MB/s)(194MiB/1783msec); 0 zone resets
00:32:13.822      slat (usec): min=26, max=110, avg=29.19, stdev= 3.14
00:32:13.822      clat (usec): min=5135, max=22865, avg=14941.09, stdev=2240.34
00:32:13.822       lat (usec): min=5162, max=22895, avg=14970.29, stdev=2240.10
00:32:13.822      clat percentiles (usec):
00:32:13.822       |  1.00th=[ 8848],  5.00th=[11600], 10.00th=[12387], 20.00th=[13304],
00:32:13.822       | 30.00th=[13829], 40.00th=[14353], 50.00th=[14746], 60.00th=[15270],
00:32:13.822       | 70.00th=[15926], 80.00th=[16712], 90.00th=[17695], 95.00th=[18744],
00:32:13.822       | 99.00th=[20841], 99.50th=[21365], 99.90th=[22152], 99.95th=[22414],
00:32:13.822       | 99.99th=[22938]
00:32:13.822     bw (  KiB/s): min=97280, max=99936, per=88.32%, avg=98320.00, stdev=1155.70, samples=4
00:32:13.822     iops        : min= 6080, max= 6246, avg=6145.00, stdev=72.23, samples=4
00:32:13.822    lat (usec)   : 750=0.21%, 1000=7.66%
00:32:13.822    lat (msec)   : 2=46.91%, 4=2.91%, 10=8.92%, 20=32.59%, 50=0.79%
00:32:13.822    cpu          : usr=96.06%, sys=2.24%, ctx=187, majf=0, minf=10327
00:32:13.822    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:32:13.822       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:13.822       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:13.822       issued rwts: total=24185,12406,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:13.822       latency   : target=0, window=0, percentile=100.00%, depth=128
00:32:13.822  
00:32:13.822  Run status group 0 (all jobs):
00:32:13.822     READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=378MiB (396MB), run=1972-1972msec
00:32:13.822    WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=194MiB (203MB), run=1783-1783msec
00:32:13.822  -----------------------------------------------------
00:32:13.822  Suppressions used:
00:32:13.822    count      bytes template
00:32:13.822        1         63 /usr/src/fio/parse.c
00:32:13.822      141      13536 /usr/src/fio/iolog.c
00:32:13.822        1          8 libtcmalloc_minimal.so
00:32:13.822  -----------------------------------------------------
00:32:13.822  
00:32:13.822   13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:14.080   13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']'
00:32:14.080   13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs))
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=()
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:32:14.080     13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:32:14.080     13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:32:14.080    13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:32:14.080   13:58:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8
00:32:17.361  Nvme0n1
00:32:17.361    13:58:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:32:22.622   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:32:22.622    13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:22.880   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:32:22.880    {
00:32:22.880      "uuid": "a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e",
00:32:22.880      "name": "lvs_0",
00:32:22.880      "base_bdev": "Nvme0n1",
00:32:22.880      "total_data_clusters": 1862,
00:32:22.880      "free_clusters": 1862,
00:32:22.880      "block_size": 512,
00:32:22.880      "cluster_size": 1073741824
00:32:22.880    }
00:32:22.880  ]'
00:32:22.880    13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e") .free_clusters'
00:32:22.880   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862
00:32:22.880    13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e") .cluster_size'
00:32:23.138   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824
00:32:23.138   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688
00:32:23.138   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688
00:32:23.138  1906688
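get_lvs_free_mb derives that figure from the two jq filters above: free_mb = free_clusters × cluster_size / 1 MiB = 1862 × 1073741824 / 1048576 = 1862 × 1024 = 1906688 MiB, which is then passed straight to bdev_lvol_create below as the volume size.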
00:32:23.138   13:58:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688
00:32:23.704  91f30b27-bc6c-4b89-98c0-65a5f0e0d9c0
00:32:23.704   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
00:32:23.704   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
00:32:23.961   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:24.219    13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:24.219    13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:24.219    13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:24.219   13:58:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:24.477  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:32:24.477  fio-3.35
00:32:24.477  Starting 1 thread
00:32:27.006  
00:32:27.006  test: (groupid=0, jobs=1): err= 0: pid=3489980: Sat Dec 14 13:58:26 2024
00:32:27.006    read: IOPS=8750, BW=34.2MiB/s (35.8MB/s)(68.5MiB/2005msec)
00:32:27.006      slat (nsec): min=1534, max=25060, avg=1758.38, stdev=509.66
00:32:27.006      clat (usec): min=210, max=332874, avg=7251.00, stdev=19777.27
00:32:27.006       lat (usec): min=212, max=332878, avg=7252.75, stdev=19777.33
00:32:27.006      clat percentiles (msec):
00:32:27.006       |  1.00th=[    6],  5.00th=[    6], 10.00th=[    6], 20.00th=[    6],
00:32:27.006       | 30.00th=[    6], 40.00th=[    7], 50.00th=[    7], 60.00th=[    7],
00:32:27.006       | 70.00th=[    7], 80.00th=[    7], 90.00th=[    7], 95.00th=[    7],
00:32:27.006       | 99.00th=[    7], 99.50th=[    9], 99.90th=[  334], 99.95th=[  334],
00:32:27.006       | 99.99th=[  334]
00:32:27.006     bw (  KiB/s): min=13072, max=42568, per=99.91%, avg=34972.00, stdev=14602.38, samples=4
00:32:27.006     iops        : min= 3268, max=10642, avg=8743.00, stdev=3650.59, samples=4
00:32:27.006    write: IOPS=8749, BW=34.2MiB/s (35.8MB/s)(68.5MiB/2005msec); 0 zone resets
00:32:27.006      slat (nsec): min=1562, max=18199, avg=1835.88, stdev=378.71
00:32:27.006      clat (usec): min=169, max=333306, avg=7219.30, stdev=19251.51
00:32:27.006       lat (usec): min=170, max=333313, avg=7221.14, stdev=19251.62
00:32:27.006      clat percentiles (msec):
00:32:27.006       |  1.00th=[    6],  5.00th=[    6], 10.00th=[    6], 20.00th=[    7],
00:32:27.006       | 30.00th=[    7], 40.00th=[    7], 50.00th=[    7], 60.00th=[    7],
00:32:27.006       | 70.00th=[    7], 80.00th=[    7], 90.00th=[    7], 95.00th=[    7],
00:32:27.006       | 99.00th=[    7], 99.50th=[   10], 99.90th=[  334], 99.95th=[  334],
00:32:27.006       | 99.99th=[  334]
00:32:27.006     bw (  KiB/s): min=13528, max=42192, per=99.86%, avg=34948.00, stdev=14280.55, samples=4
00:32:27.006     iops        : min= 3382, max=10548, avg=8737.00, stdev=3570.14, samples=4
00:32:27.006    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:32:27.006    lat (msec)   : 2=0.03%, 4=0.21%, 10=99.32%, 20=0.04%, 500=0.36%
00:32:27.006    cpu          : usr=99.45%, sys=0.15%, ctx=15, majf=0, minf=1780
00:32:27.006    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:32:27.006       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:27.006       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:27.006       issued rwts: total=17545,17542,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:27.006       latency   : target=0, window=0, percentile=100.00%, depth=128
00:32:27.006  
00:32:27.006  Run status group 0 (all jobs):
00:32:27.006     READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2005-2005msec
00:32:27.006    WRITE: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2005-2005msec
00:32:27.264  -----------------------------------------------------
00:32:27.264  Suppressions used:
00:32:27.264    count      bytes template
00:32:27.264        1         64 /usr/src/fio/parse.c
00:32:27.264        1          8 libtcmalloc_minimal.so
00:32:27.264  -----------------------------------------------------
00:32:27.264  
00:32:27.264   13:58:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:32:27.522    13:58:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f01ba502-d59f-4f5e-a32b-62bc7a9fda35
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f01ba502-d59f-4f5e-a32b-62bc7a9fda35
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f01ba502-d59f-4f5e-a32b-62bc7a9fda35
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:32:28.896    13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:32:28.896    {
00:32:28.896      "uuid": "a5ac65ea-1b77-4cd2-9b0a-a32c3d9d152e",
00:32:28.896      "name": "lvs_0",
00:32:28.896      "base_bdev": "Nvme0n1",
00:32:28.896      "total_data_clusters": 1862,
00:32:28.896      "free_clusters": 0,
00:32:28.896      "block_size": 512,
00:32:28.896      "cluster_size": 1073741824
00:32:28.896    },
00:32:28.896    {
00:32:28.896      "uuid": "f01ba502-d59f-4f5e-a32b-62bc7a9fda35",
00:32:28.896      "name": "lvs_n_0",
00:32:28.896      "base_bdev": "91f30b27-bc6c-4b89-98c0-65a5f0e0d9c0",
00:32:28.896      "total_data_clusters": 476206,
00:32:28.896      "free_clusters": 476206,
00:32:28.896      "block_size": 512,
00:32:28.896      "cluster_size": 4194304
00:32:28.896    }
00:32:28.896  ]'
00:32:28.896    13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f01ba502-d59f-4f5e-a32b-62bc7a9fda35") .free_clusters'
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206
00:32:28.896    13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f01ba502-d59f-4f5e-a32b-62bc7a9fda35") .cluster_size'
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824
00:32:28.896  1904824
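Same arithmetic, now with the nested store's 4 MiB clusters: free_mb = 476206 × 4194304 / 1048576 = 476206 × 4 = 1904824 MiB. Note that lvs_0 now reports free_clusters 0: lvs_n_0 was created directly on top of lbd_0, so the nested store consumes the parent's entire capacity.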
00:32:28.896   13:58:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824
00:32:31.429  735bd703-0e32-48bb-a543-baef7c1cc5af
00:32:31.429   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
00:32:31.701   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:31.968   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:31.968    13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:31.968    13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:31.968    13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:32:32.272   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:32:32.272   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:32:32.272   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break
00:32:32.272   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:32.272   13:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 	traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:32:32.530  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:32:32.530  fio-3.35
00:32:32.530  Starting 1 thread
00:32:35.060  
00:32:35.060  test: (groupid=0, jobs=1): err= 0: pid=3491384: Sat Dec 14 13:58:34 2024
00:32:35.060    read: IOPS=8926, BW=34.9MiB/s (36.6MB/s)(69.9MiB/2006msec)
00:32:35.060      slat (nsec): min=1480, max=31493, avg=1651.45, stdev=388.38
00:32:35.060      clat (usec): min=4536, max=12090, avg=7071.22, stdev=246.97
00:32:35.060       lat (usec): min=4540, max=12091, avg=7072.87, stdev=246.93
00:32:35.060      clat percentiles (usec):
00:32:35.060       |  1.00th=[ 6915],  5.00th=[ 6980], 10.00th=[ 6980], 20.00th=[ 6980],
00:32:35.060       | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7046],
00:32:35.060       | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7111], 95.00th=[ 7177],
00:32:35.060       | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10290], 99.95th=[10421],
00:32:35.060       | 99.99th=[11994]
00:32:35.060     bw (  KiB/s): min=34024, max=36640, per=99.98%, avg=35700.00, stdev=1153.30, samples=4
00:32:35.060     iops        : min= 8506, max= 9160, avg=8925.00, stdev=288.33, samples=4
00:32:35.060    write: IOPS=8944, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2006msec); 0 zone resets
00:32:35.060      slat (nsec): min=1521, max=17303, avg=1749.61, stdev=334.97
00:32:35.060      clat (usec): min=4573, max=12179, avg=7097.09, stdev=262.54
00:32:35.060       lat (usec): min=4577, max=12184, avg=7098.83, stdev=262.52
00:32:35.060      clat percentiles (usec):
00:32:35.060       |  1.00th=[ 6915],  5.00th=[ 6980], 10.00th=[ 7046], 20.00th=[ 7046],
00:32:35.060       | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7111],
00:32:35.060       | 70.00th=[ 7111], 80.00th=[ 7111], 90.00th=[ 7111], 95.00th=[ 7242],
00:32:35.060       | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[10421], 99.95th=[12125],
00:32:35.060       | 99.99th=[12125]
00:32:35.060     bw (  KiB/s): min=34848, max=36264, per=99.91%, avg=35744.00, stdev=625.74, samples=4
00:32:35.060     iops        : min= 8712, max= 9066, avg=8936.00, stdev=156.44, samples=4
00:32:35.060    lat (msec)   : 10=99.85%, 20=0.15%
00:32:35.060    cpu          : usr=99.20%, sys=0.35%, ctx=16, majf=0, minf=1768
00:32:35.060    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:32:35.060       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:35.060       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:35.060       issued rwts: total=17907,17942,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:35.060       latency   : target=0, window=0, percentile=100.00%, depth=128
00:32:35.060  
00:32:35.060  Run status group 0 (all jobs):
00:32:35.060     READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=69.9MiB (73.3MB), run=2006-2006msec
00:32:35.060    WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2006-2006msec
00:32:35.060  -----------------------------------------------------
00:32:35.060  Suppressions used:
00:32:35.060    count      bytes template
00:32:35.060        1         64 /usr/src/fio/parse.c
00:32:35.060        1          8 libtcmalloc_minimal.so
00:32:35.060  -----------------------------------------------------
00:32:35.060  
00:32:35.318   13:58:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:32:35.576   13:58:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync
00:32:35.576   13:58:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
00:32:45.547   13:58:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:32:45.547   13:58:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
00:32:50.812   13:58:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:32:50.812   13:58:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
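Teardown runs strictly in reverse of creation so nothing is removed while a parent bdev still references it: the nested lvol first, then its store, then the parent's lvol and store, and the NVMe controller last. Condensed from steps @76-@80 (the 120 s RPC timeout on the first delete is taken from the log):

  ./scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
  ./scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  ./scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
  ./scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0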
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:53.343   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:32:53.602  rmmod nvme_rdma
00:32:53.602  rmmod nvme_fabrics
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3486226 ']'
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3486226
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3486226 ']'
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3486226
00:32:53.602    13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:53.602    13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3486226
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3486226'
00:32:53.602  killing process with pid 3486226
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3486226
00:32:53.602   13:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3486226
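killprocess, as replayed in @954-@978 above, first probes the pid with kill -0, refuses to touch anything whose comm is sudo, then signals and reaps the target. A simplified sketch of the pattern (the pid is this run's nvmf_tgt, launched earlier by the same shell, which is what makes wait legal here):

  pid=3486226
  if kill -0 "$pid" 2>/dev/null; then
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || exit 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reaps the child; only works for children of this shell
  fi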
00:32:55.504   13:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:55.504   13:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:32:55.504  
00:32:55.504  real	0m57.167s
00:32:55.504  user	4m3.743s
00:32:55.504  sys	0m11.501s
00:32:55.504   13:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:55.504   13:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.504  ************************************
00:32:55.504  END TEST nvmf_fio_host
00:32:55.504  ************************************
00:32:55.504   13:58:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma
00:32:55.504   13:58:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:55.504   13:58:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:55.504   13:58:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.504  ************************************
00:32:55.504  START TEST nvmf_failover
00:32:55.504  ************************************
00:32:55.504   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma
00:32:55.504  * Looking for test storage...
00:32:55.504  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:55.504     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version
00:32:55.504     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:55.504    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-:
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-:
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<'
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0
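The block above is scripts/common.sh's version comparison deciding `lt 1.15 2`: both strings are split on `.`, `-` and `:` and compared element by element, so 1 < 2 settles it at the first field and the function returns 0, selecting the LCOV option set exported below. A compressed re-implementation of the same idea (not the script itself, which also handles >, = and padding):

  lt() {  # usage: lt VER1 VER2 -> success when VER1 < VER2
      local IFS=.-: i v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1  # versions are equal
  }
  lt 1.15 2 && echo older   # prints "older", matching the trace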
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:55.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:55.763  		--rc genhtml_branch_coverage=1
00:32:55.763  		--rc genhtml_function_coverage=1
00:32:55.763  		--rc genhtml_legend=1
00:32:55.763  		--rc geninfo_all_blocks=1
00:32:55.763  		--rc geninfo_unexecuted_blocks=1
00:32:55.763  		
00:32:55.763  		'
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:55.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:55.763  		--rc genhtml_branch_coverage=1
00:32:55.763  		--rc genhtml_function_coverage=1
00:32:55.763  		--rc genhtml_legend=1
00:32:55.763  		--rc geninfo_all_blocks=1
00:32:55.763  		--rc geninfo_unexecuted_blocks=1
00:32:55.763  		
00:32:55.763  		'
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:55.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:55.763  		--rc genhtml_branch_coverage=1
00:32:55.763  		--rc genhtml_function_coverage=1
00:32:55.763  		--rc genhtml_legend=1
00:32:55.763  		--rc geninfo_all_blocks=1
00:32:55.763  		--rc geninfo_unexecuted_blocks=1
00:32:55.763  		
00:32:55.763  		'
00:32:55.763    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:55.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:55.763  		--rc genhtml_branch_coverage=1
00:32:55.763  		--rc genhtml_function_coverage=1
00:32:55.763  		--rc genhtml_legend=1
00:32:55.763  		--rc geninfo_all_blocks=1
00:32:55.763  		--rc geninfo_unexecuted_blocks=1
00:32:55.763  		
00:32:55.763  		'
00:32:55.763   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:32:55.763     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:55.764     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:32:55.764     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
00:32:55.764     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:55.764     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:55.764     13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:55.764      13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.764      13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.764      13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.764      13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:32:55.764      13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:55.764  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
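The `[: : integer expression expected` line above is harmless scaffolding noise, not a test failure: nvmf/common.sh runs `[ '' -eq 1 ]` with an unset option variable, and `[` cannot compare an empty string numerically, so the test fails (which the script treats the same as the flag being off) while printing to stderr. A minimal reproduction plus the usual defaulting fix (FLAG is an illustrative name):

  FLAG=''
  [ "$FLAG" -eq 1 ]       # bash: [: : integer expression expected
  [ "${FLAG:-0}" -eq 1 ]  # empty defaults to 0; an ordinary, quiet false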
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:55.764    13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable
00:32:55.764   13:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=()
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:33:02.329  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:33:02.329  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:33:02.329  Found net devices under 0000:d9:00.0: mlx_0_0
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:33:02.329  Found net devices under 0000:d9:00.1: mlx_0_1
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm
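Aside from the Linux uname guard at common.sh@62, those seven modprobe calls are the whole of load_ib_rdma_modules. As a minimal sketch, assuming only the module names traced at common.sh@66-72:

  # InfiniBand/RDMA core stack needed before an RDMA transport can be created.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
  done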
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:33:02.329     13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:33:02.329     13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:02.329    13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:33:02.329  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:33:02.329      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:33:02.329      altname enp217s0f0np0
00:33:02.329      altname ens818f0np0
00:33:02.329      inet 192.168.100.8/24 scope global mlx_0_0
00:33:02.329         valid_lft forever preferred_lft forever
00:33:02.329   13:59:01 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:33:02.329  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:33:02.329      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:33:02.329      altname enp217s0f1np1
00:33:02.329      altname ens818f1np1
00:33:02.329      inet 192.168.100.9/24 scope global mlx_0_1
00:33:02.329         valid_lft forever preferred_lft forever
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
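allocate_nic_ips resolves each RDMA netdev to its IPv4 address with the ip/awk/cut pipeline traced at common.sh@116-117 above. A minimal standalone sketch of that helper, assuming only the mlx_0_0 name observed in this run:

  get_ip_address() {
    # First IPv4 address bound to the interface, with the prefix length stripped.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed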
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:33:02.329   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:33:02.329      13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:33:02.329      13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1
00:33:02.329     13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:02.329    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:02.587   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:33:02.587  192.168.100.9'
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:33:02.587  192.168.100.9'
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1
00:33:02.587   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:33:02.587  192.168.100.9'
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2
00:33:02.587    13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma
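With nvme-rdma loaded, the environment setup in common.sh is complete. The two target IPs were picked out of RDMA_IP_LIST with head and tail, as traced at common.sh@485-486; a condensed sketch of that selection, using the addresses observed above:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9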
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3498436
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3498436
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3498436 ']'
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:02.588  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:02.588   13:59:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:02.588  [2024-12-14 13:59:02.225038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:33:02.588  [2024-12-14 13:59:02.225127] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:02.847  [2024-12-14 13:59:02.354999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:02.847  [2024-12-14 13:59:02.454371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:02.847  [2024-12-14 13:59:02.454426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:02.847  [2024-12-14 13:59:02.454439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:02.847  [2024-12-14 13:59:02.454452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:02.847  [2024-12-14 13:59:02.454463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:02.847  [2024-12-14 13:59:02.456891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:33:02.847  [2024-12-14 13:59:02.456961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:02.847  [2024-12-14 13:59:02.456968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:33:03.415   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:03.415   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:33:03.416   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:03.416   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:03.416   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:03.416   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:03.416   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:33:03.675  [2024-12-14 13:59:03.277157] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f043a348940) succeed.
00:33:03.675  [2024-12-14 13:59:03.286638] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f043a304940) succeed.
00:33:03.934   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:33:04.193  Malloc0
00:33:04.193   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:04.452   13:59:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:04.452   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:33:04.711  [2024-12-14 13:59:04.293156] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:33:04.711   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:33:04.970  [2024-12-14 13:59:04.489565] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:33:04.970   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:33:04.970  [2024-12-14 13:59:04.698350] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
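The target side is now fully provisioned: one RDMA transport, a 64 MB malloc-backed namespace, and three listeners on the same address. Condensing the rpc.py calls from failover.sh@22-28 above into one sketch (rpc.py abbreviates the full scripts/rpc.py path; the loop is shorthand for the three add_listener calls traced individually):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
  done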
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3498891
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3498891 /var/tmp/bdevperf.sock
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3498891 ']'
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:05.230  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:05.230   13:59:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:06.167   13:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:06.167   13:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:33:06.167   13:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:06.167  NVMe0n1
00:33:06.167   13:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:06.426  
00:33:06.426   13:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3499079
00:33:06.426   13:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:06.426   13:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:33:07.805   13:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:33:07.805   13:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:33:11.095   13:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:11.095  
00:33:11.095   13:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:33:11.355   13:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:14.645   13:59:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:33:14.645  [2024-12-14 13:59:14.011725] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:33:14.645   13:59:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:15.582   13:59:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:33:15.582   13:59:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3499079
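That wait marks the end of the failover choreography: bdevperf holds two -x failover paths while the script removes listeners out from under it and re-adds them elsewhere. A condensed recap of failover.sh@35-57 as traced above (rpc.py and bdevperf.py again abbreviate the full script paths):

  BDEV_RPC="rpc.py -s /var/tmp/bdevperf.sock"
  $BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # 15 s verify workload
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 3    # I/O fails over to the 4421 path
  $BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  sleep 3    # I/O fails over to the 4422 path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  wait    # until perform_tests finishes its 15 s run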
00:33:22.151  {
00:33:22.151    "results": [
00:33:22.151      {
00:33:22.151        "job": "NVMe0n1",
00:33:22.151        "core_mask": "0x1",
00:33:22.151        "workload": "verify",
00:33:22.151        "status": "finished",
00:33:22.151        "verify_range": {
00:33:22.151          "start": 0,
00:33:22.151          "length": 16384
00:33:22.151        },
00:33:22.151        "queue_depth": 128,
00:33:22.151        "io_size": 4096,
00:33:22.151        "runtime": 15.0061,
00:33:22.151        "iops": 12243.287729656606,
00:33:22.151        "mibps": 47.825342693971116,
00:33:22.151        "io_failed": 4220,
00:33:22.151        "io_timeout": 0,
00:33:22.151        "avg_latency_us": 10194.401789537309,
00:33:22.151        "min_latency_us": 507.904,
00:33:22.151        "max_latency_us": 1020054.7328
00:33:22.151      }
00:33:22.151    ],
00:33:22.151    "core_count": 1
00:33:22.151  }
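The summary numbers are internally consistent: mibps is just iops times the 4096-byte io_size, scaled to MiB. A quick check against the JSON above (values copied verbatim; the total-I/O figure is an approximation from iops times runtime):

  awk 'BEGIN {
    iops = 12243.287729656606; io_size = 4096; runtime = 15.0061
    printf "%.6f MiB/s\n", iops * io_size / (1024 * 1024)   # 47.825343, matching the mibps field
    printf "~%.0f I/Os completed\n", iops * runtime         # roughly 183724 over the 15 s run
  }'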
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3498891
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3498891 ']'
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3498891
00:33:22.151    13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:22.151    13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498891
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498891'
00:33:22.151  killing process with pid 3498891
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3498891
00:33:22.151   13:59:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3498891
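killprocess, traced above, checks that the pid is still alive with kill -0 and inspects its comm name before signalling, so it never kills a bare sudo wrapper by mistake. A simplified sketch along the lines of autotest_common.sh@954-978 (the real helper also handles the sudo-owned case, elided here):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return            # nothing to do if the process is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
      return    # the real helper digs out the child pid here; omitted in this sketch
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }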
00:33:22.731   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:22.731  [2024-12-14 13:59:04.808543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:33:22.731  [2024-12-14 13:59:04.808640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498891 ]
00:33:22.731  [2024-12-14 13:59:04.942991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:22.731  [2024-12-14 13:59:05.051659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:22.731  Running I/O for 15 seconds...
00:33:22.731      15587.00 IOPS,    60.89 MiB/s
[2024-12-14T12:59:22.469Z]      8424.00 IOPS,    32.91 MiB/s
[2024-12-14T12:59:22.469Z] [2024-12-14 13:59:08.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x183500
00:33:22.731  [2024-12-14 13:59:08.333943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.731  [2024-12-14 13:59:08.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.333971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.333986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x183500
00:33:22.732  [2024-12-14 13:59:08.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.732  [2024-12-14 13:59:08.334983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.334996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.732  [2024-12-14 13:59:08.335011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.335024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.732  [2024-12-14 13:59:08.335038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.732  [2024-12-14 13:59:08.335052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.732  [2024-12-14 13:59:08.335066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [2024-12-14 13:59:08.335482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.733  [2024-12-14 13:59:08.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.733  [... 59 further queued WRITE commands (lba:4248 through lba:4712, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:33:22.734  [2024-12-14 13:59:08.337187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.734  [2024-12-14 13:59:08.337200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.734  [2024-12-14 13:59:08.339255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:22.734  [2024-12-14 13:59:08.339281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:22.734  [2024-12-14 13:59:08.339294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0
00:33:22.734  [2024-12-14 13:59:08.339311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.734  [2024-12-14 13:59:08.339485] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:33:22.734  [2024-12-14 13:59:08.339505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:33:22.734  [2024-12-14 13:59:08.342578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:33:22.734  [2024-12-14 13:59:08.370994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:33:22.734  [2024-12-14 13:59:08.415723] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:33:22.734       9905.00 IOPS,    38.69 MiB/s
00:33:22.734     11296.50 IOPS,    44.13 MiB/s
00:33:22.734     10739.20 IOPS,    41.95 MiB/s
00:33:22.734  [2024-12-14 13:59:11.818697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x181500
00:33:22.734  [2024-12-14 13:59:11.818758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.735  [... 109 further queued command/completion pairs: READ commands (lba:49152 through lba:49624, len:8, SGL KEYED DATA BLOCK, key:0x181500) interleaved with WRITE commands (lba:49712 through lba:50096, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:33:22.737  [2024-12-14 13:59:11.821918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x181500
00:33:22.737  [2024-12-14 13:59:11.821933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.737  [2024-12-14 13:59:11.821949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x181500
00:33:22.737  [2024-12-14 13:59:11.821961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.737  [2024-12-14 13:59:11.821977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x181500
00:33:22.737  [2024-12-14 13:59:11.821989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.737  [2024-12-14 13:59:11.822007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.737  [2024-12-14 13:59:11.822019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:11.822206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.822347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x181500
00:33:22.738  [2024-12-14 13:59:11.822359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.824601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:22.738  [2024-12-14 13:59:11.824623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:22.738  [2024-12-14 13:59:11.824636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49704 len:8 PRP1 0x0 PRP2 0x0
00:33:22.738  [2024-12-14 13:59:11.824649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:11.824849] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:33:22.738  [2024-12-14 13:59:11.824866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:33:22.738  [2024-12-14 13:59:11.827933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:33:22.738  [2024-12-14 13:59:11.856340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:33:22.738  [2024-12-14 13:59:11.897574] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:33:22.738       9877.83 IOPS,    38.59 MiB/s
[2024-12-14T12:59:22.476Z]     10707.43 IOPS,    41.83 MiB/s
[2024-12-14T12:59:22.476Z]     11333.25 IOPS,    44.27 MiB/s
[2024-12-14T12:59:22.476Z]     11731.89 IOPS,    45.83 MiB/s
[2024-12-14T12:59:22.476Z] [2024-12-14 13:59:16.226628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.226694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.226738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.226770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.226797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.226827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.226977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.226990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.227015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.738  [2024-12-14 13:59:16.227040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.738  [2024-12-14 13:59:16.227263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x183500
00:33:22.738  [2024-12-14 13:59:16.227275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.227861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.227984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.227996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.228026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.228053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x183500
00:33:22.739  [2024-12-14 13:59:16.228078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.739  [2024-12-14 13:59:16.228267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.739  [2024-12-14 13:59:16.228278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.228980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.228994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.229006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.229031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.229057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.229082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x183500
00:33:22.740  [2024-12-14 13:59:16.229107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.229132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.740  [2024-12-14 13:59:16.229145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.740  [2024-12-14 13:59:16.229157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x183500
00:33:22.741  [2024-12-14 13:59:16.229716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.229962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.741  [2024-12-14 13:59:16.229973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.232036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:22.741  [2024-12-14 13:59:16.232056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:22.741  [2024-12-14 13:59:16.232068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0
00:33:22.741  [2024-12-14 13:59:16.232082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:22.741  [2024-12-14 13:59:16.232272] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:33:22.741  [2024-12-14 13:59:16.232289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:22.741  [2024-12-14 13:59:16.235391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:22.741  [2024-12-14 13:59:16.263294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:33:22.741      10558.70 IOPS,    41.24 MiB/s
00:33:22.741  [2024-12-14 13:59:16.300215] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:33:22.741      10974.91 IOPS,    42.87 MiB/s
00:33:22.741      11371.58 IOPS,    44.42 MiB/s
00:33:22.741      11706.62 IOPS,    45.73 MiB/s
00:33:22.741      11993.93 IOPS,    46.85 MiB/s
00:33:22.741      12243.67 IOPS,    47.83 MiB/s
00:33:22.741                                                                                                  Latency(us)
00:33:22.741  
00:33:22.741   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:22.741  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:22.741  	 Verification LBA range: start 0x0 length 0x4000
00:33:22.741  	 NVMe0n1             :      15.01   12243.29      47.83     281.22     0.00   10194.40     507.90 1020054.73
00:33:22.741  
00:33:22.741   ===================================================================================================================
00:33:22.741  
00:33:22.741   Total                       :              12243.29      47.83     281.22     0.00   10194.40     507.90 1020054.73
00:33:22.741  Received shutdown signal, test time was about 15.000000 seconds
00:33:22.741  
00:33:22.741                                                                                                  Latency(us)
00:33:22.741  
00:33:22.742   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:22.742  
00:33:22.742   ===================================================================================================================
00:33:22.742  
00:33:22.742   Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:33:22.742    13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
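
The run is then validated by counting reset completions in the captured log: each of the three forced failovers must have produced exactly one "Resetting controller successful" notice. A minimal sketch of the check traced above (log file name taken from the try.txt this test writes):

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || exit 1
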
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3501797
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3501797 /var/tmp/bdevperf.sock
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3501797 ']'
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:22.742  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:22.742   13:59:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:23.748   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:23.748   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
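
waitforlisten blocks until the freshly launched bdevperf instance is up and serving RPCs on the UNIX socket. The trace only shows max_retries=100 and the first iteration succeeding, so the following poll is a sketch with an assumed interval, not the exact implementation:

    # hypothetical poll; the real waitforlisten lives in autotest_common.sh
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bdevperf.sock ]] && break
        sleep 0.1
    done
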
00:33:23.748   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:33:23.748  [2024-12-14 13:59:23.455153] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:33:23.748   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:33:24.006  [2024-12-14 13:59:23.635751] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:33:24.006   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:24.264  NVMe0n1
00:33:24.264   13:59:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:24.522  
00:33:24.522   13:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:24.780  
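
Steps 78-80 attach the same subsystem three times over ports 4420-4422 with -x failover: the first call creates bdev NVMe0n1, and the later calls only register alternate trids on the existing controller, which is why they print nothing. The repeated pattern, generalized as a sketch:

    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t rdma -a 192.168.100.8 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
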
00:33:24.780   13:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:33:24.780   13:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:25.038   13:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:25.295   13:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:33:28.579   13:59:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:28.579   13:59:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:33:28.579   13:59:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3502735
00:33:28.579   13:59:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:28.579   13:59:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3502735
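
Because bdevperf was started with -z it idles until told to run; steps 89-92 trigger the workload over the RPC socket and wait for the helper to exit, at which point the JSON results below are printed. The launch-and-wait pattern, as a sketch:

    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"
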
00:33:29.516  {
00:33:29.516    "results": [
00:33:29.516      {
00:33:29.516        "job": "NVMe0n1",
00:33:29.516        "core_mask": "0x1",
00:33:29.516        "workload": "verify",
00:33:29.516        "status": "finished",
00:33:29.516        "verify_range": {
00:33:29.516          "start": 0,
00:33:29.516          "length": 16384
00:33:29.516        },
00:33:29.516        "queue_depth": 128,
00:33:29.516        "io_size": 4096,
00:33:29.516        "runtime": 1.011006,
00:33:29.516        "iops": 15446.001309586689,
00:33:29.516        "mibps": 60.335942615573,
00:33:29.516        "io_failed": 0,
00:33:29.516        "io_timeout": 0,
00:33:29.516        "avg_latency_us": 8240.246767213115,
00:33:29.516        "min_latency_us": 3224.3712,
00:33:29.516        "max_latency_us": 14155.776
00:33:29.516      }
00:33:29.516    ],
00:33:29.516    "core_count": 1
00:33:29.516  }
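
The JSON block above is the machine-readable counterpart of the summary table that follows; saved to a file, the headline numbers can be extracted with jq (sketch; file name assumed):

    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
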
00:33:29.516   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:29.516  [2024-12-14 13:59:22.499379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:33:29.516  [2024-12-14 13:59:22.499475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501797 ]
00:33:29.516  [2024-12-14 13:59:22.635019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:29.516  [2024-12-14 13:59:22.738325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:29.516  [2024-12-14 13:59:24.838468] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:33:29.516  [2024-12-14 13:59:24.839169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:33:29.516  [2024-12-14 13:59:24.839229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:33:29.516  [2024-12-14 13:59:24.878290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
00:33:29.516  [2024-12-14 13:59:24.894270] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:33:29.516  Running I/O for 1 seconds...
00:33:29.516      15429.00 IOPS,    60.27 MiB/s
00:33:29.516                                                                                                  Latency(us)
00:33:29.516  
00:33:29.516   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:29.516  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:29.516  	 Verification LBA range: start 0x0 length 0x4000
00:33:29.516  	 NVMe0n1             :       1.01   15446.00      60.34       0.00     0.00    8240.25    3224.37   14155.78
00:33:29.516  
00:33:29.516   ===================================================================================================================
00:33:29.516  
00:33:29.516   Total                       :              15446.00      60.34       0.00     0.00    8240.25    3224.37   14155.78
00:33:29.516   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:29.516   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:29.775   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:30.034   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:30.034   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:30.293   13:59:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:30.293   13:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
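
Step 84 detached the 4420 path while I/O was in flight, so the controller must fail over to 4421/4422. After the 3-second settle, step 88 below confirms the controller survived the loss of its original path; the check, condensed as a sketch:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
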
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3501797
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3501797 ']'
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3501797
00:33:33.580    13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:33.580    13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3501797
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3501797'
00:33:33.580  killing process with pid 3501797
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3501797
00:33:33.580   13:59:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3501797
00:33:34.516   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:33:34.516   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:33:34.775  rmmod nvme_rdma
00:33:34.775  rmmod nvme_fabrics
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
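
nvmfcleanup drops set -e and allows up to 20 unload attempts because nvme-rdma can stay busy while queues drain; here it unloaded on the first pass (rmmod lines above). A sketch of that retry loop, with an assumed back-off between attempts:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1   # back-off interval is an assumption, not shown in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e
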
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3498436 ']'
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3498436
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3498436 ']'
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3498436
00:33:34.775    13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:34.775    13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498436
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498436'
00:33:34.775  killing process with pid 3498436
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3498436
00:33:34.775   13:59:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3498436
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:33:36.679  
00:33:36.679  real	0m41.082s
00:33:36.679  user	2m15.044s
00:33:36.679  sys	0m8.099s
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:36.679  ************************************
00:33:36.679  END TEST nvmf_failover
00:33:36.679  ************************************
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:36.679  ************************************
00:33:36.679  START TEST nvmf_host_discovery
00:33:36.679  ************************************
00:33:36.679   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:33:36.679  * Looking for test storage...
00:33:36.679  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:36.679     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:36.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:36.679  		--rc genhtml_branch_coverage=1
00:33:36.679  		--rc genhtml_function_coverage=1
00:33:36.679  		--rc genhtml_legend=1
00:33:36.679  		--rc geninfo_all_blocks=1
00:33:36.679  		--rc geninfo_unexecuted_blocks=1
00:33:36.679  		
00:33:36.679  		'
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:36.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:36.679  		--rc genhtml_branch_coverage=1
00:33:36.679  		--rc genhtml_function_coverage=1
00:33:36.679  		--rc genhtml_legend=1
00:33:36.679  		--rc geninfo_all_blocks=1
00:33:36.679  		--rc geninfo_unexecuted_blocks=1
00:33:36.679  		
00:33:36.679  		'
00:33:36.679    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:36.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:36.679  		--rc genhtml_branch_coverage=1
00:33:36.679  		--rc genhtml_function_coverage=1
00:33:36.679  		--rc genhtml_legend=1
00:33:36.679  		--rc geninfo_all_blocks=1
00:33:36.679  		--rc geninfo_unexecuted_blocks=1
00:33:36.679  		
00:33:36.679  		'
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:36.680  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:36.680  		--rc genhtml_branch_coverage=1
00:33:36.680  		--rc genhtml_function_coverage=1
00:33:36.680  		--rc genhtml_legend=1
00:33:36.680  		--rc geninfo_all_blocks=1
00:33:36.680  		--rc geninfo_unexecuted_blocks=1
00:33:36.680  		
00:33:36.680  		'
00:33:36.680   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:33:36.680     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:36.680    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:36.680     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:33:36.939     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:33:36.939     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:36.939     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:36.939     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:36.939      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.939      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.939      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.939      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:33:36.939      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:33:36.939  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:36.939    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:36.940    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
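
The "integer expression expected" message above is a real (benign) bug captured by the trace: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' when the variable it tests is unset, and -eq requires integer operands. A defensive form would default the operand first (the actual variable name is not visible here, so VAR is a stand-in):

    # hypothetical fix for common.sh line 33; VAR is a placeholder
    [ "${VAR:-0}" -eq 1 ] && echo "feature enabled"
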
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:33:36.940  Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:33:36.940  
00:33:36.940  real	0m0.210s
00:33:36.940  user	0m0.118s
00:33:36.940  sys	0m0.107s
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:36.940  ************************************
00:33:36.940  END TEST nvmf_host_discovery
00:33:36.940  ************************************
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:36.940  ************************************
00:33:36.940  START TEST nvmf_host_multipath_status
00:33:36.940  ************************************
00:33:36.940   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:33:36.940  * Looking for test storage...
00:33:36.940  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:33:36.940    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:36.940     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:33:36.940     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:33:37.199    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:37.199     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:37.200  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:37.200  		--rc genhtml_branch_coverage=1
00:33:37.200  		--rc genhtml_function_coverage=1
00:33:37.200  		--rc genhtml_legend=1
00:33:37.200  		--rc geninfo_all_blocks=1
00:33:37.200  		--rc geninfo_unexecuted_blocks=1
00:33:37.200  		
00:33:37.200  		'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:37.200  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:37.200  		--rc genhtml_branch_coverage=1
00:33:37.200  		--rc genhtml_function_coverage=1
00:33:37.200  		--rc genhtml_legend=1
00:33:37.200  		--rc geninfo_all_blocks=1
00:33:37.200  		--rc geninfo_unexecuted_blocks=1
00:33:37.200  		
00:33:37.200  		'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:37.200  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:37.200  		--rc genhtml_branch_coverage=1
00:33:37.200  		--rc genhtml_function_coverage=1
00:33:37.200  		--rc genhtml_legend=1
00:33:37.200  		--rc geninfo_all_blocks=1
00:33:37.200  		--rc geninfo_unexecuted_blocks=1
00:33:37.200  		
00:33:37.200  		'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:37.200  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:37.200  		--rc genhtml_branch_coverage=1
00:33:37.200  		--rc genhtml_function_coverage=1
00:33:37.200  		--rc genhtml_legend=1
00:33:37.200  		--rc geninfo_all_blocks=1
00:33:37.200  		--rc geninfo_unexecuted_blocks=1
00:33:37.200  		
00:33:37.200  		'
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:37.200     13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:37.200      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.200      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.200      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.200      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:33:37.200      13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:33:37.200  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:37.200    13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:33:37.200   13:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:33:43.768  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:33:43.768  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:33:43.768   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:33:43.769  Found net devices under 0000:d9:00.0: mlx_0_0
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:33:43.769  Found net devices under 0000:d9:00.1: mlx_0_1
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
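[Editor's note] Each kept PCI function is then mapped to its kernel netdev through sysfs (@410-@429): the glob under /sys/bus/pci/devices/$pci/net/ resolves to the interface directory, and the ##*/ expansion strips everything but the name. The same steps, runnable on their own:

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"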
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm
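[Editor's note] With hardware paths confirmed (is_hw=yes), rdma_device_init first loads the kernel RDMA stack via load_ib_rdma_modules (@62-@72). The seven modprobe calls above, collapsed into a loop:

    # Kernel IB/RDMA modules required before configuring addresses or listeners.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done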
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:33:43.769     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:33:43.769     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
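[Editor's note] allocate_nic_ips starts by asking get_rdma_if_list (@96-@109) which interfaces are RDMA-capable: it captures the list printed by rxe_cfg rxe-net and keeps only the net_devs entries that appear in it, echoing each match and jumping to the next candidate with continue 2. Reconstructed from the trace (the function wrapper is inferred, not copied from common.sh):

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # runs rxe_cfg_small.sh
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2
                fi
            done
        done
    }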
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:33:43.769  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:33:43.769      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:33:43.769      altname enp217s0f0np0
00:33:43.769      altname ens818f0np0
00:33:43.769      inet 192.168.100.8/24 scope global mlx_0_0
00:33:43.769         valid_lft forever preferred_lft forever
00:33:43.769   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:43.769    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:33:44.027  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:33:44.027      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:33:44.027      altname enp217s0f1np1
00:33:44.027      altname ens818f1np1
00:33:44.027      inet 192.168.100.9/24 scope global mlx_0_1
00:33:44.027         valid_lft forever preferred_lft forever
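[Editor's note] For each RDMA interface, get_ip_address (@116-@117) slices the IPv4 address out of the one-line ip output: field 4 of 'ip -o -4 addr show' is the CIDR address, and cut drops the prefix length. As a self-contained helper:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9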
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:33:44.027      13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:33:44.027      13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1
00:33:44.027     13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:33:44.027    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:33:44.027   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:33:44.027  192.168.100.9'
00:33:44.028    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:33:44.028  192.168.100.9'
00:33:44.028    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:33:44.028    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:33:44.028  192.168.100.9'
00:33:44.028    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2
00:33:44.028    13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
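[Editor's note] With two addresses collected, RDMA_IP_LIST is split by plain head/tail (@485-@486): line one becomes NVMF_FIRST_TARGET_IP, line two NVMF_SECOND_TARGET_IP. Equivalent standalone lines:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)

Only the first IP is actually exercised by this test: both listeners go on 192.168.100.8, and the two paths are distinguished by service ID (port) instead.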
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3507502
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3507502
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3507502 ']'
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:44.028  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:44.028   13:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:44.028  [2024-12-14 13:59:43.730722] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:33:44.028  [2024-12-14 13:59:43.730828] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:44.286  [2024-12-14 13:59:43.864539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:44.286  [2024-12-14 13:59:43.958915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:44.286  [2024-12-14 13:59:43.958965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:44.286  [2024-12-14 13:59:43.958978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:44.286  [2024-12-14 13:59:43.958991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:44.286  [2024-12-14 13:59:43.959000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:44.286  [2024-12-14 13:59:43.961053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:44.286  [2024-12-14 13:59:43.961061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
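[Editor's note] nvmfappstart -m 0x3 (@33) launches nvmf_tgt on a two-core mask and blocks in waitforlisten until PID 3507502 answers on /var/tmp/spdk.sock; the EAL and reactor notices above are the target coming up on cores 0 and 1. A rough equivalent of the start-and-wait sequence (the polling loop is an assumption about what waitforlisten does, and the relative paths stand in for the workspace ones):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll the RPC socket until the app accepts requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done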
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3507502
00:33:44.852   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:33:45.110  [2024-12-14 13:59:44.747816] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f7eaa3bd940) succeed.
00:33:45.110  [2024-12-14 13:59:44.757122] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f7eaa379940) succeed.
00:33:45.368   13:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:33:45.625  Malloc0
00:33:45.625   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:33:45.625   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:45.881   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:33:46.139  [2024-12-14 13:59:45.711767] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:33:46.139   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:33:46.397  [2024-12-14 13:59:45.896056] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
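[Editor's note] The target side is now complete: transport, backing bdev, subsystem with ANA reporting, namespace, and two listeners on the same address but different service IDs, which is what gives the host two distinct paths to one namespace. The RPC sequence collected from @36-@42 (RPC shortens the workspace rpc.py path; per rpc.py's help, -a allows any host, -s sets the serial number, -r enables ANA reporting, -m caps namespaces):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421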
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3507856
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3507856 /var/tmp/bdevperf.sock
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3507856 ']'
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:46.397  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:46.397   13:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:47.330   13:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:47.330   13:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:33:47.330   13:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:33:47.330   13:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:33:47.588  Nvme0n1
00:33:47.588   13:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:33:47.845  Nvme0n1
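[Editor's note] On the host side, bdevperf runs as a second SPDK app on its own RPC socket (/var/tmp/bdevperf.sock), and both attach calls (@55/@56) pass the same bdev name with -x multipath, so the second call adds a path to the existing Nvme0 controller rather than creating a new one; each returns Nvme0n1, the namespace bdev the I/O runs against. Per rpc.py's bdev_nvme_attach_controller options, -l -1 sets an infinite controller-loss timeout and -o 10 a 10 s reconnect delay:

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10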
00:33:47.845   13:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:33:47.845   13:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:33:50.375   13:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:33:50.375   13:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:33:50.375   13:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:33:50.375   13:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:33:51.308   13:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:33:51.308   13:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:51.308    13:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:51.308    13:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:51.567   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:51.567   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:51.567    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:51.567    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:51.825   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:51.825   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:51.825    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:51.825    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:51.825   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:51.825   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:51.825    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:51.825    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:52.083   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:52.083   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:52.083    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:52.083    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:52.341   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:52.341   13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:52.341    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:52.341    13:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:52.599   13:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
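[Editor's note] check_status takes six booleans: current, then connected, then accessible, each for the 4420 path followed by the 4421 path. So 'true false true true true true' reads: 4420 carries the I/O (current), 4421 does not, and both paths are connected and accessible, which is what active_passive with two optimized listeners should produce. Each probe is one jq query against bdev_nvme_get_io_paths, as the @64 lines show; port_status reconstructed from the trace (the wrapper shape is inferred):

    port_status() {   # e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 got
        got=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }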
00:33:52.599   13:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:33:52.599   13:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:33:52.599   13:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:33:52.857   13:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
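[Editor's note] Each scenario flips the two listeners' ANA states on the target, then sleeps one second so the host can observe the change (via the ANA change async event, presumably) before re-checking path status. set_ANA_state reconstructed from @59/@60, first argument for the 4420 listener, second for 4421:

    set_ANA_state() {   # e.g. set_ANA_state non_optimized optimized
        ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

The expected matrix in the checks that follow tracks ANA semantics: an optimized or non_optimized path stays accessible, an inaccessible one does not, and in active_passive the current flag lands on the most-preferred accessible path (4421 once 4420 goes non_optimized at @94).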
00:33:53.791   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:33:53.791   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:53.791    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.791    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:54.049   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:54.049   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:54.049    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.049    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:54.307   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.307   13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:54.307    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.307    13:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:54.565   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.565   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:54.565    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.565    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:54.565   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.565   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:54.565    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.565    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:54.823   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.823   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:54.823    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.823    13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:55.081   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:55.081   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:33:55.081   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:33:55.340   13:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:33:55.340   13:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:33:56.712   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:33:56.712   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:56.712    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.712    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:56.712   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.712   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:56.712    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.712    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:56.970   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:56.971   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:56.971    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.971    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:56.971   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.971   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:56.971    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:56.971    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:57.229   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:57.229   13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:57.229    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:57.229    13:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:57.487   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:57.487   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:57.487    13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:57.487    13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:57.745   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:57.745   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:33:57.745   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:33:57.745   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:33:58.003   13:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:33:58.937   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:33:58.937   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:58.937    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:58.937    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:59.195   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.195   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:59.195    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.195    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:59.452   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:59.452   13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:59.452    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.452    13:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:59.452   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.452   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:59.452    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.452    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:59.710   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.710   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:59.710    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.710    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:59.968   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.968   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:59.968    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.968    13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:00.238   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:00.238   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:34:00.238   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
00:34:00.238   13:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:34:00.507   14:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:34:01.440   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:34:01.440   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:01.440    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.440    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:01.698   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:01.698   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:01.698    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.698    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:01.956   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:01.956   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:01.956    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.956    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:02.214   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:02.214   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:02.214    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:02.214    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:02.214   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:02.214   14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:34:02.214    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:02.214    14:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:02.471   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:02.471   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:02.471    14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:02.471    14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:02.729   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:02.729   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:34:02.729   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
00:34:02.987   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:34:02.988   14:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:34:04.361   14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:34:04.361   14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:04.361    14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.361    14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:04.361   14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:04.361   14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:04.361    14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.361    14:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:04.620   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.620   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:04.620    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.620    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:04.620   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.620   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:04.620    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.620    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:04.878   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.878   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:34:04.878    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.878    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:05.137   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:05.137   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:05.137    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:05.137    14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:05.395   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:05.395   14:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
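[Editor's note] Everything so far ran under the default active_passive policy, where exactly one path is current at a time. @116 switches Nvme0n1 to active_active, and the next check (@121, 'true true true true true true') expects both 4420 and 4421 to carry I/O simultaneously once both listeners are optimized again. The switch is a single RPC:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active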
00:34:05.395   14:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:34:05.395   14:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:34:05.653   14:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:34:05.911   14:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:34:06.845   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:34:06.845   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:06.845    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:06.845    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:07.103   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:07.103   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:07.103    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:07.103    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:07.361   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:07.361   14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:07.361    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:07.361    14:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:07.361   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:07.361   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:07.361    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:07.361    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:07.620   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:07.620   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:07.620    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:07.620    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:07.878   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:07.878   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:07.878    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:07.878    14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:08.136   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
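With both paths optimized under active_active, all six checks come back true: both paths are current (used for I/O), connected, and accessible. The six positional arguments of check_status map onto port_status calls in exactly the order the trace shows; a sketch:

    check_status() {
        # check_status <4420 current> <4421 current> <4420 connected>
        #              <4421 connected> <4420 accessible> <4421 accessible>
        port_status 4420 current "$1" &&
        port_status 4421 current "$2" &&
        port_status 4420 connected "$3" &&
        port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }

Whether the real script chains with && or relies on set -e is not visible in the trace.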
00:34:08.136   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:34:08.136   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:34:08.395   14:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:34:08.395   14:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:09.771   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:09.771    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:10.029   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:10.029   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:10.029    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:10.029    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:10.288   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:10.288   14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:10.288    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:10.288    14:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:10.546   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:10.546   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:10.546    14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:10.546    14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:10.546   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:10.546   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:34:10.546   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:34:10.804   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:34:11.062   14:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:34:11.997   14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:34:11.997   14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:11.997    14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:11.997    14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:12.255   14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:12.255   14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:12.255    14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.255    14:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:12.513   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:12.514   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:12.514    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.514    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:12.514   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:12.514   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:12.514    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.514    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:12.772   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:12.772   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:12.772    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.772    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:13.031   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.031   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:13.031    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:13.031    14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:13.289   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.289   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:34:13.289   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:34:13.289   14:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:34:13.548   14:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
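To eyeball all three fields for both paths in one shot, rather than one RPC round-trip per field as the helper does, an illustrative one-liner against the same RPC output would be:

    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
        '.poll_groups[].io_paths[] |
         "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'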
00:34:14.482   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:34:14.482   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:14.482    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:14.482    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:14.740   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:14.740   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:14.740    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:14.740    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:14.999   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:14.999   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:14.999    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:14.999    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:15.257   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:15.257   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:15.257    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.257    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:15.257   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:15.257   14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:15.257    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.257    14:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:15.516   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:15.516   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:15.516    14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.516    14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3507856
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3507856 ']'
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3507856
00:34:15.774    14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:15.774    14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507856
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507856'
00:34:15.774  killing process with pid 3507856
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3507856
00:34:15.774   14:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3507856
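killprocess then tears down the bdevperf instance (pid 3507856, running as reactor_2). Condensed from the @954-@978 trace above, error paths elided (a reconstruction, not the verbatim common/autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # @954: need a pid
        kill -0 "$pid" || return 1                       # @958: pid must still be alive
        if [ "$(uname)" = Linux ]; then                  # @959
            local name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_2" here
            # @964: a sudo-wrapped process would need different handling (not the case here)
        fi
        echo "killing process with pid $pid"             # @972
        kill "$pid"                                      # @973
        wait "$pid"                                      # @978: reap it
    }

The JSON block that follows is bdevperf's final result dump, interleaved into the log as the process exits.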
00:34:15.774  {
00:34:15.774    "results": [
00:34:15.774      {
00:34:15.774        "job": "Nvme0n1",
00:34:15.774        "core_mask": "0x4",
00:34:15.774        "workload": "verify",
00:34:15.774        "status": "terminated",
00:34:15.774        "verify_range": {
00:34:15.774          "start": 0,
00:34:15.774          "length": 16384
00:34:15.774        },
00:34:15.774        "queue_depth": 128,
00:34:15.774        "io_size": 4096,
00:34:15.774        "runtime": 27.743203,
00:34:15.774        "iops": 13927.735741255254,
00:34:15.774        "mibps": 54.405217739278335,
00:34:15.774        "io_failed": 0,
00:34:15.774        "io_timeout": 0,
00:34:15.774        "avg_latency_us": 9168.043737573498,
00:34:15.774        "min_latency_us": 49.5616,
00:34:15.774        "max_latency_us": 3019898.88
00:34:15.774      }
00:34:15.774    ],
00:34:15.774    "core_count": 1
00:34:15.774  }
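The summary is internally consistent: 13927.74 IOPS at the 4096-byte io_size is 54.41 MiB/s, matching the mibps field, and iops x runtime gives roughly 386,400 total I/Os over the 27.74 s run. For example:

    awk 'BEGIN { printf "%.4f MiB/s\n", 13927.735741255254 * 4096 / 2^20 }'   # -> 54.4052
    awk 'BEGIN { printf "%.0f I/Os\n",  13927.735741255254 * 27.743203 }'     # -> ~386400

The ~3 s max_latency_us plausibly reflects I/O that had to wait out a window in which its path was inaccessible, though the summary alone cannot confirm that.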
00:34:16.712   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3507856
00:34:16.712   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:16.712  [2024-12-14 13:59:45.991869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:34:16.712  [2024-12-14 13:59:45.991991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507856 ]
00:34:16.712  [2024-12-14 13:59:46.118726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:16.712  [2024-12-14 13:59:46.220524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:34:16.712  Running I/O for 90 seconds...
00:34:16.712      15911.00 IOPS,    62.15 MiB/s
[2024-12-14T13:00:16.450Z]     16064.00 IOPS,    62.75 MiB/s
[2024-12-14T13:00:16.450Z]     16032.33 IOPS,    62.63 MiB/s
[2024-12-14T13:00:16.450Z]     16064.00 IOPS,    62.75 MiB/s
[2024-12-14T13:00:16.450Z]     16074.00 IOPS,    62.79 MiB/s
[2024-12-14T13:00:16.450Z]     16142.33 IOPS,    63.06 MiB/s
[2024-12-14T13:00:16.450Z]     16147.86 IOPS,    63.08 MiB/s
[2024-12-14T13:00:16.450Z]     16157.75 IOPS,    63.12 MiB/s
[2024-12-14T13:00:16.450Z]     16158.67 IOPS,    63.12 MiB/s
[2024-12-14T13:00:16.450Z]     16163.20 IOPS,    63.14 MiB/s
[2024-12-14T13:00:16.450Z]     16177.00 IOPS,    63.19 MiB/s
[2024-12-14T13:00:16.450Z]     16179.92 IOPS,    63.20 MiB/s
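Each bdevperf progress sample pairs interval IOPS with the equivalent MiB/s at the 4 KiB io_size; e.g. for the first sample:

    awk 'BEGIN { printf "%.2f MiB/s\n", 15911 * 4096 / 2^20 }'   # -> 62.15, matching the line above

The ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices that follow are NVMe path-related completions (status code type 3h, status code 02h) logged for commands completed on a path whose ANA group had been made inaccessible earlier in the run; the host multipath layer is expected to fail those over to the remaining accessible path, which is why throughput keeps ticking along above.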
[2024-12-14T13:00:16.450Z] [2024-12-14 13:59:59.935036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182f00
00:34:16.712  [2024-12-14 13:59:59.935859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.935971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.935985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:16.712  [2024-12-14 13:59:59.936000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.712  [2024-12-14 13:59:59.936014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.936979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.936994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.713  [2024-12-14 13:59:59.937205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:16.713  [2024-12-14 13:59:59.937221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.937977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.937992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182f00
00:34:16.714  [2024-12-14 13:59:59.938357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:16.714  [2024-12-14 13:59:59.938372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.714  [2024-12-14 13:59:59.938387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.938973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.938989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 13:59:59.939352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 13:59:59.939368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:16.715      15299.62 IOPS,    59.76 MiB/s
[2024-12-14T13:00:16.453Z]     14206.79 IOPS,    55.50 MiB/s
[2024-12-14T13:00:16.453Z]     13259.67 IOPS,    51.80 MiB/s
[2024-12-14T13:00:16.453Z]     13143.50 IOPS,    51.34 MiB/s
[2024-12-14T13:00:16.453Z]     13319.59 IOPS,    52.03 MiB/s
[2024-12-14T13:00:16.453Z]     13417.94 IOPS,    52.41 MiB/s
[2024-12-14T13:00:16.453Z]     13424.11 IOPS,    52.44 MiB/s
[2024-12-14T13:00:16.453Z]     13427.00 IOPS,    52.45 MiB/s
[2024-12-14T13:00:16.453Z]     13538.29 IOPS,    52.88 MiB/s
[2024-12-14T13:00:16.453Z]     13661.05 IOPS,    53.36 MiB/s
[2024-12-14T13:00:16.453Z]     13760.13 IOPS,    53.75 MiB/s
[2024-12-14T13:00:16.453Z]     13748.58 IOPS,    53.71 MiB/s
[2024-12-14T13:00:16.453Z]     13735.48 IOPS,    53.65 MiB/s
[2024-12-14T13:00:16.453Z] [2024-12-14 14:00:13.162953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 14:00:13.163015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 14:00:13.163083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x182f00
00:34:16.715  [2024-12-14 14:00:13.163100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:16.715  [2024-12-14 14:00:13.163117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.163887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.163983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.163998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.716  [2024-12-14 14:00:13.164792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182f00
00:34:16.716  [2024-12-14 14:00:13.164853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:16.716  [2024-12-14 14:00:13.164869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.164903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.164917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.164937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.164952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.164967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.164981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.164996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.165132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.165253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.165313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182f00
00:34:16.717  [2024-12-14 14:00:13.165434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:16.717  [2024-12-14 14:00:13.165449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.717  [2024-12-14 14:00:13.165463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:16.717      13780.04 IOPS,    53.83 MiB/s
[2024-12-14T13:00:16.455Z]     13870.81 IOPS,    54.18 MiB/s
[2024-12-14T13:00:16.455Z] Received shutdown signal, test time was about 27.743866 seconds
00:34:16.717  
00:34:16.717                                                                                                  Latency(us)
00:34:16.717  
[2024-12-14T13:00:16.455Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:16.717  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:16.717  	 Verification LBA range: start 0x0 length 0x4000
00:34:16.717  	 Nvme0n1             :      27.74   13927.74      54.41       0.00     0.00    9168.04      49.56 3019898.88
00:34:16.717  
[2024-12-14T13:00:16.455Z]  ===================================================================================================================
00:34:16.717  
[2024-12-14T13:00:16.455Z]  Total                       :              13927.74      54.41       0.00     0.00    9168.04      49.56 3019898.88
00:34:16.717   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:34:16.975  rmmod nvme_rdma
00:34:16.975  rmmod nvme_fabrics
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3507502 ']'
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3507502
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3507502 ']'
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3507502
00:34:16.975    14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:16.975    14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507502
00:34:16.975   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:16.976   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:16.976   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507502'
00:34:16.976  killing process with pid 3507502
00:34:16.976   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3507502
00:34:16.976   14:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3507502
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:34:18.878  
00:34:18.878  real	0m41.615s
00:34:18.878  user	1m55.931s
00:34:18.878  sys	0m9.541s
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:18.878  ************************************
00:34:18.878  END TEST nvmf_host_multipath_status
00:34:18.878  ************************************
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.878  ************************************
00:34:18.878  START TEST nvmf_discovery_remove_ifc
00:34:18.878  ************************************
00:34:18.878   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:34:18.878  * Looking for test storage...
00:34:18.878  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:18.878    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:18.878     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:18.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:18.879  		--rc genhtml_branch_coverage=1
00:34:18.879  		--rc genhtml_function_coverage=1
00:34:18.879  		--rc genhtml_legend=1
00:34:18.879  		--rc geninfo_all_blocks=1
00:34:18.879  		--rc geninfo_unexecuted_blocks=1
00:34:18.879  		
00:34:18.879  		'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:18.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:18.879  		--rc genhtml_branch_coverage=1
00:34:18.879  		--rc genhtml_function_coverage=1
00:34:18.879  		--rc genhtml_legend=1
00:34:18.879  		--rc geninfo_all_blocks=1
00:34:18.879  		--rc geninfo_unexecuted_blocks=1
00:34:18.879  		
00:34:18.879  		'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:18.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:18.879  		--rc genhtml_branch_coverage=1
00:34:18.879  		--rc genhtml_function_coverage=1
00:34:18.879  		--rc genhtml_legend=1
00:34:18.879  		--rc geninfo_all_blocks=1
00:34:18.879  		--rc geninfo_unexecuted_blocks=1
00:34:18.879  		
00:34:18.879  		'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:18.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:18.879  		--rc genhtml_branch_coverage=1
00:34:18.879  		--rc genhtml_function_coverage=1
00:34:18.879  		--rc genhtml_legend=1
00:34:18.879  		--rc geninfo_all_blocks=1
00:34:18.879  		--rc geninfo_unexecuted_blocks=1
00:34:18.879  		
00:34:18.879  		'
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:18.879      14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.879      14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.879      14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.879      14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:34:18.879      14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:18.879  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
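The "integer expression expected" line is a real bash error from test/nvmf/common.sh line 33, visible in the trace just above it: '[' '' -eq 1 ']' hands -eq an empty string where it requires an integer, so the test exits non-zero and the guarded branch is silently skipped on every run. A defensive version would default the value before comparing; a sketch, with $flag standing in for whatever variable line 33 expands:

    flag=""                           # unset/empty, as in the trace above
    if [ "${flag:-0}" -eq 1 ]; then   # default to 0 so -eq always sees an integer
        echo "flag enabled"
    fi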
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:34:18.879  Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:34:18.879  
00:34:18.879  real	0m0.205s
00:34:18.879  user	0m0.119s
00:34:18.879  sys	0m0.104s
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:18.879  ************************************
00:34:18.879  END TEST nvmf_discovery_remove_ifc
00:34:18.879  ************************************
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.879  ************************************
00:34:18.879  START TEST nvmf_identify_kernel_target
00:34:18.879  ************************************
00:34:18.879   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:34:18.879  * Looking for test storage...
00:34:18.879  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:34:18.879    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version
00:34:18.879     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:19.138    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:19.138    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:19.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:19.139  		--rc genhtml_branch_coverage=1
00:34:19.139  		--rc genhtml_function_coverage=1
00:34:19.139  		--rc genhtml_legend=1
00:34:19.139  		--rc geninfo_all_blocks=1
00:34:19.139  		--rc geninfo_unexecuted_blocks=1
00:34:19.139  		
00:34:19.139  		'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:19.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:19.139  		--rc genhtml_branch_coverage=1
00:34:19.139  		--rc genhtml_function_coverage=1
00:34:19.139  		--rc genhtml_legend=1
00:34:19.139  		--rc geninfo_all_blocks=1
00:34:19.139  		--rc geninfo_unexecuted_blocks=1
00:34:19.139  		
00:34:19.139  		'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:19.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:19.139  		--rc genhtml_branch_coverage=1
00:34:19.139  		--rc genhtml_function_coverage=1
00:34:19.139  		--rc genhtml_legend=1
00:34:19.139  		--rc geninfo_all_blocks=1
00:34:19.139  		--rc geninfo_unexecuted_blocks=1
00:34:19.139  		
00:34:19.139  		'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:19.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:19.139  		--rc genhtml_branch_coverage=1
00:34:19.139  		--rc genhtml_function_coverage=1
00:34:19.139  		--rc genhtml_legend=1
00:34:19.139  		--rc geninfo_all_blocks=1
00:34:19.139  		--rc geninfo_unexecuted_blocks=1
00:34:19.139  		
00:34:19.139  		'
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
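A few lines up, nvme gen-hostnqn produced the host NQN (nqn.2014-08.org.nvmexpress:uuid:<UUID>, typically derived from the machine's DMI system UUID), and common.sh reuses the UUID tail as NVME_HOSTID so that every discover/connect call carries a matching --hostnqn/--hostid pair. A sketch of that pairing; the parameter expansion is an assumption, the trace does not show how the ID is actually extracted:

    HOSTNQN="$(nvme gen-hostnqn)"     # nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    HOSTID="${HOSTNQN##*uuid:}"       # keep only the UUID after "uuid:"
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"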
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:19.139     14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:19.139      14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:19.139      14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:19.139      14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:19.139      14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:34:19.139      14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:19.139  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:19.139    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:19.139   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:19.140   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:19.140    14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:19.140   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:19.140   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:19.140   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:34:19.140   14:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:34:25.703  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:34:25.703  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:34:25.703  Found net devices under 0000:d9:00.0: mlx_0_0
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:34:25.703  Found net devices under 0000:d9:00.1: mlx_0_1
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
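The two "Found net devices" lines come straight from sysfs: for each Mellanox PCI function the script globs /sys/bus/pci/devices/$pci/net/* and keeps the basename, which is how 0000:d9:00.0 and 0000:d9:00.1 resolve to mlx_0_0 and mlx_0_1. The same lookup in isolation:

    # Map a PCI network function to its kernel netdev name(s).
    pci=0000:d9:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "${dev##*/}"    # prints mlx_0_0 on this host
    done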
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init
00:34:25.703   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
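rdma_device_init loads the kernel RDMA stack in dependency order: ib_core, the connection managers (ib_cm, iw_cm, rdma_cm), and the userspace entry points (ib_umad, ib_uverbs, rdma_ucm). A standalone check that they all landed, offered as a convenience sketch, not something common.sh itself runs:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        if lsmod | awk -v m="$m" '$1 == m {found=1} END {exit !found}'; then
            echo "$m: loaded"
        else
            echo "$m: MISSING"
        fi
    done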
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:34:25.704  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:34:25.704      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:34:25.704      altname enp217s0f0np0
00:34:25.704      altname ens818f0np0
00:34:25.704      inet 192.168.100.8/24 scope global mlx_0_0
00:34:25.704         valid_lft forever preferred_lft forever
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:34:25.704  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:34:25.704      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:34:25.704      altname enp217s0f1np1
00:34:25.704      altname ens818f1np1
00:34:25.704      inet 192.168.100.9/24 scope global mlx_0_1
00:34:25.704         valid_lft forever preferred_lft forever
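get_ip_address is the three-stage pipeline shown in the trace: ip -o -4 addr show prints one line per address, awk takes the fourth field (the CIDR, e.g. 192.168.100.9/24), and cut drops the prefix length. That is how 192.168.100.8 and 192.168.100.9 are recovered from the two interfaces above:

    # Bare IPv4 address of an interface, same pipeline as nvmf/common.sh@117.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8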
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:34:25.704   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:34:25.704      14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:34:25.704      14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:34:25.704     14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:34:25.704    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:34:25.963  192.168.100.9'
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:34:25.963  192.168.100.9'
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:34:25.963  192.168.100.9'
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
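RDMA_IP_LIST is a newline-separated list, so the split above is plain line selection: head -n 1 yields the first target IP, and tail -n +2 | head -n 1 ("drop one line, then take one") yields the second. The same split in isolation:

    ips="$(printf '%s\n' 192.168.100.8 192.168.100.9)"
    first=$(echo "$ips" | head -n 1)                # 192.168.100.8
    second=$(echo "$ips" | tail -n +2 | head -n 1)  # 192.168.100.9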
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:25.963    14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8
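get_main_ns_ip does not store the address itself; ip_candidates maps the transport to a variable name (rdma maps to NVMF_FIRST_TARGET_IP), which is why the trace assigns ip=NVMF_FIRST_TARGET_IP before 192.168.100.8 appears. The final step is bash indirect expansion:

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    NVMF_FIRST_TARGET_IP=192.168.100.8
    var=${ip_candidates[rdma]}    # "NVMF_FIRST_TARGET_IP", the name
    echo "${!var}"                # indirect expansion prints 192.168.100.8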
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:34:25.963   14:00:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:34:29.253  Waiting for block devices as requested
00:34:29.253  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:29.253  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:29.513  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:29.513  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:29.513  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:29.513  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:29.771  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:29.771  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:29.771  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:30.030  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:30.030  0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
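setup.sh reset hands the devices back to their in-kernel drivers, the DMA engines to ioatdma and the local disk to nvme, so that disk can back the kernel target namespace a few steps later. The underlying mechanism is the usual sysfs unbind/driver_override/probe sequence; a generic sketch for one function, not necessarily setup.sh's exact code path:

    bdf=0000:d8:00.0
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind        # release from vfio-pci
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # rebind to nvme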
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:34:30.289  No valid GPT data, bailing
00:34:30.289    14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
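Everything from common.sh@686 through @705 is the kernel-target bring-up over configfs, done right after block_in_use confirmed /dev/nvme0n1 carries no partition table: create the subsystem, one namespace backed by the local disk, and an RDMA port on 192.168.100.8:4420, then link the subsystem into the port to publish it. Reconstructed as one annotated sequence; the xtrace hides redirections, so the attribute file names below are the standard nvmet ones, an informed assumption rather than something this log proves:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1                                > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1                     > "$ns/device_path"
    echo 1                                > "$ns/enable"
    echo 192.168.100.8                    > "$port/addr_traddr"
    echo rdma                             > "$port/addr_trtype"
    echo 4420                             > "$port/addr_trsvcid"
    echo ipv4                             > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port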
00:34:30.289   14:00:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420
00:34:30.547  
00:34:30.547  Discovery Log Number of Records 2, Generation counter 2
00:34:30.547  =====Discovery Log Entry 0======
00:34:30.547  trtype:  rdma
00:34:30.547  adrfam:  ipv4
00:34:30.547  subtype: current discovery subsystem
00:34:30.547  treq:    not specified, sq flow control disable supported
00:34:30.547  portid:  1
00:34:30.547  trsvcid: 4420
00:34:30.547  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:34:30.547  traddr:  192.168.100.8
00:34:30.547  eflags:  none
00:34:30.547  rdma_prtype: not specified
00:34:30.547  rdma_qptype: connected
00:34:30.547  rdma_cms:    rdma-cm
00:34:30.547  rdma_pkey: 0x0000
00:34:30.547  =====Discovery Log Entry 1======
00:34:30.547  trtype:  rdma
00:34:30.547  adrfam:  ipv4
00:34:30.547  subtype: nvme subsystem
00:34:30.547  treq:    not specified, sq flow control disable supported
00:34:30.547  portid:  1
00:34:30.547  trsvcid: 4420
00:34:30.547  subnqn:  nqn.2016-06.io.spdk:testnqn
00:34:30.547  traddr:  192.168.100.8
00:34:30.547  eflags:  none
00:34:30.547  rdma_prtype: not specified
00:34:30.547  rdma_qptype: connected
00:34:30.547  rdma_cms:    rdma-cm
00:34:30.547  rdma_pkey: 0x0000
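The discovery page contains exactly the two records one would expect: the discovery subsystem itself (entry 0) and the freshly configured kernel target (entry 1), both rdma/ipv4 on 192.168.100.8:4420. From here a host attaches with nvme connect; the harness earlier widened the command to 'nvme connect -i 15', where -i sets the I/O queue count in nvme-cli. A sketch using the parameters from entry 1:

    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    nvme list                                        # namespace appears as /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # detach when finished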
00:34:30.547   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:rdma 	adrfam:IPv4 	traddr:192.168.100.8
00:34:30.547  	trsvcid:4420 	subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:34:30.547  =====================================================
00:34:30.547  NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:34:30.547  =====================================================
00:34:30.547  Controller Capabilities/Features
00:34:30.547  ================================
00:34:30.547  Vendor ID:                             0000
00:34:30.547  Subsystem Vendor ID:                   0000
00:34:30.547  Serial Number:                         31d7da8ec3ff90e97be9
00:34:30.547  Model Number:                          Linux
00:34:30.547  Firmware Version:                      6.8.9-20
00:34:30.547  Recommended Arb Burst:                 0
00:34:30.547  IEEE OUI Identifier:                   00 00 00
00:34:30.547  Multi-path I/O
00:34:30.547    May have multiple subsystem ports:   No
00:34:30.547    May have multiple controllers:       No
00:34:30.547    Associated with SR-IOV VF:           No
00:34:30.547  Max Data Transfer Size:                Unlimited
00:34:30.547  Max Number of Namespaces:              0
00:34:30.547  Max Number of I/O Queues:              1024
00:34:30.547  NVMe Specification Version (VS):       1.3
00:34:30.547  NVMe Specification Version (Identify): 1.3
00:34:30.547  Maximum Queue Entries:                 128
00:34:30.547  Contiguous Queues Required:            No
00:34:30.547  Arbitration Mechanisms Supported
00:34:30.547    Weighted Round Robin:                Not Supported
00:34:30.547    Vendor Specific:                     Not Supported
00:34:30.547  Reset Timeout:                         7500 ms
00:34:30.547  Doorbell Stride:                       4 bytes
00:34:30.547  NVM Subsystem Reset:                   Not Supported
00:34:30.547  Command Sets Supported
00:34:30.547    NVM Command Set:                     Supported
00:34:30.548  Boot Partition:                        Not Supported
00:34:30.548  Memory Page Size Minimum:              4096 bytes
00:34:30.548  Memory Page Size Maximum:              4096 bytes
00:34:30.548  Persistent Memory Region:              Not Supported
00:34:30.548  Optional Asynchronous Events Supported
00:34:30.548    Namespace Attribute Notices:         Not Supported
00:34:30.548    Firmware Activation Notices:         Not Supported
00:34:30.548    ANA Change Notices:                  Not Supported
00:34:30.548    PLE Aggregate Log Change Notices:    Not Supported
00:34:30.548    LBA Status Info Alert Notices:       Not Supported
00:34:30.548    EGE Aggregate Log Change Notices:    Not Supported
00:34:30.548    Normal NVM Subsystem Shutdown event: Not Supported
00:34:30.548    Zone Descriptor Change Notices:      Not Supported
00:34:30.548    Discovery Log Change Notices:        Supported
00:34:30.548  Controller Attributes
00:34:30.548    128-bit Host Identifier:             Not Supported
00:34:30.548    Non-Operational Permissive Mode:     Not Supported
00:34:30.548    NVM Sets:                            Not Supported
00:34:30.548    Read Recovery Levels:                Not Supported
00:34:30.548    Endurance Groups:                    Not Supported
00:34:30.548    Predictable Latency Mode:            Not Supported
00:34:30.548    Traffic Based Keep Alive:            Not Supported
00:34:30.548    Namespace Granularity:               Not Supported
00:34:30.548    SQ Associations:                     Not Supported
00:34:30.548    UUID List:                           Not Supported
00:34:30.548    Multi-Domain Subsystem:              Not Supported
00:34:30.548    Fixed Capacity Management:           Not Supported
00:34:30.548    Variable Capacity Management:        Not Supported
00:34:30.548    Delete Endurance Group:              Not Supported
00:34:30.548    Delete NVM Set:                      Not Supported
00:34:30.548    Extended LBA Formats Supported:      Not Supported
00:34:30.548    Flexible Data Placement Supported:   Not Supported
00:34:30.548  
00:34:30.548  Controller Memory Buffer Support
00:34:30.548  ================================
00:34:30.548  Supported:                             No
00:34:30.548  
00:34:30.548  Persistent Memory Region Support
00:34:30.548  ================================
00:34:30.548  Supported:                             No
00:34:30.548  
00:34:30.548  Admin Command Set Attributes
00:34:30.548  ============================
00:34:30.548  Security Send/Receive:                 Not Supported
00:34:30.548  Format NVM:                            Not Supported
00:34:30.548  Firmware Activate/Download:            Not Supported
00:34:30.548  Namespace Management:                  Not Supported
00:34:30.548  Device Self-Test:                      Not Supported
00:34:30.548  Directives:                            Not Supported
00:34:30.548  NVMe-MI:                               Not Supported
00:34:30.548  Virtualization Management:             Not Supported
00:34:30.548  Doorbell Buffer Config:                Not Supported
00:34:30.548  Get LBA Status Capability:             Not Supported
00:34:30.548  Command & Feature Lockdown Capability: Not Supported
00:34:30.548  Abort Command Limit:                   1
00:34:30.548  Async Event Request Limit:             1
00:34:30.548  Number of Firmware Slots:              N/A
00:34:30.548  Firmware Slot 1 Read-Only:             N/A
00:34:30.548  Firmware Activation Without Reset:     N/A
00:34:30.548  Multiple Update Detection Support:     N/A
00:34:30.548  Firmware Update Granularity:           No Information Provided
00:34:30.548  Per-Namespace SMART Log:               No
00:34:30.548  Asymmetric Namespace Access Log Page:  Not Supported
00:34:30.548  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:34:30.548  Command Effects Log Page:              Not Supported
00:34:30.548  Get Log Page Extended Data:            Supported
00:34:30.548  Telemetry Log Pages:                   Not Supported
00:34:30.548  Persistent Event Log Pages:            Not Supported
00:34:30.548  Supported Log Pages Log Page:          May Support
00:34:30.548  Commands Supported & Effects Log Page: Not Supported
00:34:30.548  Feature Identifiers & Effects Log Page: May Support
00:34:30.548  NVMe-MI Commands & Effects Log Page:   May Support
00:34:30.548  Data Area 4 for Telemetry Log:         Not Supported
00:34:30.548  Error Log Page Entries Supported:      1
00:34:30.548  Keep Alive:                            Not Supported
00:34:30.548  
00:34:30.548  NVM Command Set Attributes
00:34:30.548  ==========================
00:34:30.548  Submission Queue Entry Size
00:34:30.548    Max:                       1
00:34:30.548    Min:                       1
00:34:30.548  Completion Queue Entry Size
00:34:30.548    Max:                       1
00:34:30.548    Min:                       1
00:34:30.548  Number of Namespaces:        0
00:34:30.548  Compare Command:             Not Supported
00:34:30.548  Write Uncorrectable Command: Not Supported
00:34:30.548  Dataset Management Command:  Not Supported
00:34:30.548  Write Zeroes Command:        Not Supported
00:34:30.548  Set Features Save Field:     Not Supported
00:34:30.548  Reservations:                Not Supported
00:34:30.548  Timestamp:                   Not Supported
00:34:30.548  Copy:                        Not Supported
00:34:30.548  Volatile Write Cache:        Not Present
00:34:30.548  Atomic Write Unit (Normal):  1
00:34:30.548  Atomic Write Unit (PFail):   1
00:34:30.548  Atomic Compare & Write Unit: 1
00:34:30.548  Fused Compare & Write:       Not Supported
00:34:30.548  Scatter-Gather List
00:34:30.548    SGL Command Set:           Supported
00:34:30.548    SGL Keyed:                 Supported
00:34:30.548    SGL Bit Bucket Descriptor: Not Supported
00:34:30.548    SGL Metadata Pointer:      Not Supported
00:34:30.548    Oversized SGL:             Not Supported
00:34:30.548    SGL Metadata Address:      Not Supported
00:34:30.548    SGL Offset:                Supported
00:34:30.548    Transport SGL Data Block:  Not Supported
00:34:30.548  Replay Protected Memory Block:  Not Supported
00:34:30.548  
00:34:30.548  Firmware Slot Information
00:34:30.548  =========================
00:34:30.548  Active slot:                 0
00:34:30.548  
00:34:30.548  
00:34:30.548  Error Log
00:34:30.548  =========
00:34:30.548  
00:34:30.548  Active Namespaces
00:34:30.548  =================
00:34:30.548  Discovery Log Page
00:34:30.548  ==================
00:34:30.548  Generation Counter:                    2
00:34:30.548  Number of Records:                     2
00:34:30.548  Record Format:                         0
00:34:30.548  
00:34:30.548  Discovery Log Entry 0
00:34:30.548  ----------------------
00:34:30.548  Transport Type:                        1 (RDMA)
00:34:30.548  Address Family:                        1 (IPv4)
00:34:30.548  Subsystem Type:                        3 (Current Discovery Subsystem)
00:34:30.548  Entry Flags:
00:34:30.548    Duplicate Returned Information:                      0
00:34:30.548    Explicit Persistent Connection Support for Discovery: 0
00:34:30.548  Transport Requirements:
00:34:30.548    Secure Channel:                      Not Specified
00:34:30.548  Port ID:                               1 (0x0001)
00:34:30.548  Controller ID:                         65535 (0xffff)
00:34:30.548  Admin Max SQ Size:                     32
00:34:30.548  Transport Service Identifier:          4420
00:34:30.548  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:34:30.548  Transport Address:                     192.168.100.8
00:34:30.548  Transport Specific Address Subtype - RDMA
00:34:30.548    RDMA QP Service Type:                1 (Reliable Connected)
00:34:30.548    RDMA Provider Type:                  1 (No provider specified)
00:34:30.548    RDMA CM Service:                     1 (RDMA_CM)
00:34:30.548  Discovery Log Entry 1
00:34:30.548  ----------------------
00:34:30.548  Transport Type:                        1 (RDMA)
00:34:30.548  Address Family:                        1 (IPv4)
00:34:30.548  Subsystem Type:                        2 (NVM Subsystem)
00:34:30.548  Entry Flags:
00:34:30.548    Duplicate Returned Information:                      0
00:34:30.548    Explicit Persistent Connection Support for Discovery: 0
00:34:30.548  Transport Requirements:
00:34:30.548    Secure Channel:                      Not Specified
00:34:30.548  Port ID:                               1 (0x0001)
00:34:30.548  Controller ID:                         65535 (0xffff)
00:34:30.548  Admin Max SQ Size:                     32
00:34:30.548  Transport Service Identifier:          4420
00:34:30.548  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:testnqn
00:34:30.548  Transport Address:                     192.168.100.8
00:34:30.548  Transport Specific Address Subtype - RDMA
00:34:30.548    RDMA QP Service Type:                1 (Reliable Connected)
00:34:30.807    RDMA Provider Type:                  1 (No provider specified)
00:34:30.807    RDMA CM Service:                     1 (RDMA_CM)
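(The two records above are the complete discovery response for this port: entry 0 advertises the discovery subsystem itself, entry 1 the nqn.2016-06.io.spdk:testnqn data subsystem. As a hedged sketch, the same log page can normally be fetched from a host with nvme-cli, using the transport address and service identifier shown in the entries:

  # query the discovery service on the RDMA port from the log above
  nvme discover -t rdma -a 192.168.100.8 -s 4420
)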
00:34:30.807   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:rdma 	adrfam:IPv4 	traddr:192.168.100.8 	trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:30.807  get_feature(0x01) failed
00:34:30.807  get_feature(0x02) failed
00:34:30.807  get_feature(0x04) failed
00:34:30.807  =====================================================
00:34:30.807  NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn
00:34:30.808  =====================================================
00:34:30.808  Controller Capabilities/Features
00:34:30.808  ================================
00:34:30.808  Vendor ID:                             0000
00:34:30.808  Subsystem Vendor ID:                   0000
00:34:30.808  Serial Number:                         220935c3ef90e4af9892
00:34:30.808  Model Number:                          SPDK-nqn.2016-06.io.spdk:testnqn
00:34:30.808  Firmware Version:                      6.8.9-20
00:34:30.808  Recommended Arb Burst:                 6
00:34:30.808  IEEE OUI Identifier:                   00 00 00
00:34:30.808  Multi-path I/O
00:34:30.808    May have multiple subsystem ports:   Yes
00:34:30.808    May have multiple controllers:       Yes
00:34:30.808    Associated with SR-IOV VF:           No
00:34:30.808  Max Data Transfer Size:                1048576
00:34:30.808  Max Number of Namespaces:              1024
00:34:30.808  Max Number of I/O Queues:              128
00:34:30.808  NVMe Specification Version (VS):       1.3
00:34:30.808  NVMe Specification Version (Identify): 1.3
00:34:30.808  Maximum Queue Entries:                 128
00:34:30.808  Contiguous Queues Required:            No
00:34:30.808  Arbitration Mechanisms Supported
00:34:30.808    Weighted Round Robin:                Not Supported
00:34:30.808    Vendor Specific:                     Not Supported
00:34:30.808  Reset Timeout:                         7500 ms
00:34:30.808  Doorbell Stride:                       4 bytes
00:34:30.808  NVM Subsystem Reset:                   Not Supported
00:34:30.808  Command Sets Supported
00:34:30.808    NVM Command Set:                     Supported
00:34:30.808  Boot Partition:                        Not Supported
00:34:30.808  Memory Page Size Minimum:              4096 bytes
00:34:30.808  Memory Page Size Maximum:              4096 bytes
00:34:30.808  Persistent Memory Region:              Not Supported
00:34:30.808  Optional Asynchronous Events Supported
00:34:30.808    Namespace Attribute Notices:         Supported
00:34:30.808    Firmware Activation Notices:         Not Supported
00:34:30.808    ANA Change Notices:                  Supported
00:34:30.808    PLE Aggregate Log Change Notices:    Not Supported
00:34:30.808    LBA Status Info Alert Notices:       Not Supported
00:34:30.808    EGE Aggregate Log Change Notices:    Not Supported
00:34:30.808    Normal NVM Subsystem Shutdown event: Not Supported
00:34:30.808    Zone Descriptor Change Notices:      Not Supported
00:34:30.808    Discovery Log Change Notices:        Not Supported
00:34:30.808  Controller Attributes
00:34:30.808    128-bit Host Identifier:             Supported
00:34:30.808    Non-Operational Permissive Mode:     Not Supported
00:34:30.808    NVM Sets:                            Not Supported
00:34:30.808    Read Recovery Levels:                Not Supported
00:34:30.808    Endurance Groups:                    Not Supported
00:34:30.808    Predictable Latency Mode:            Not Supported
00:34:30.808    Traffic Based Keep Alive:            Supported
00:34:30.808    Namespace Granularity:               Not Supported
00:34:30.808    SQ Associations:                     Not Supported
00:34:30.808    UUID List:                           Not Supported
00:34:30.808    Multi-Domain Subsystem:              Not Supported
00:34:30.808    Fixed Capacity Management:           Not Supported
00:34:30.808    Variable Capacity Management:        Not Supported
00:34:30.808    Delete Endurance Group:              Not Supported
00:34:30.808    Delete NVM Set:                      Not Supported
00:34:30.808    Extended LBA Formats Supported:      Not Supported
00:34:30.808    Flexible Data Placement Supported:   Not Supported
00:34:30.808  
00:34:30.808  Controller Memory Buffer Support
00:34:30.808  ================================
00:34:30.808  Supported:                             No
00:34:30.808  
00:34:30.808  Persistent Memory Region Support
00:34:30.808  ================================
00:34:30.808  Supported:                             No
00:34:30.808  
00:34:30.808  Admin Command Set Attributes
00:34:30.808  ============================
00:34:30.808  Security Send/Receive:                 Not Supported
00:34:30.808  Format NVM:                            Not Supported
00:34:30.808  Firmware Activate/Download:            Not Supported
00:34:30.808  Namespace Management:                  Not Supported
00:34:30.808  Device Self-Test:                      Not Supported
00:34:30.808  Directives:                            Not Supported
00:34:30.808  NVMe-MI:                               Not Supported
00:34:30.808  Virtualization Management:             Not Supported
00:34:30.808  Doorbell Buffer Config:                Not Supported
00:34:30.808  Get LBA Status Capability:             Not Supported
00:34:30.808  Command & Feature Lockdown Capability: Not Supported
00:34:30.808  Abort Command Limit:                   4
00:34:30.808  Async Event Request Limit:             4
00:34:30.808  Number of Firmware Slots:              N/A
00:34:30.808  Firmware Slot 1 Read-Only:             N/A
00:34:30.808  Firmware Activation Without Reset:     N/A
00:34:30.808  Multiple Update Detection Support:     N/A
00:34:30.808  Firmware Update Granularity:           No Information Provided
00:34:30.808  Per-Namespace SMART Log:               Yes
00:34:30.808  Asymmetric Namespace Access Log Page:  Supported
00:34:30.808  ANA Transition Time:                   10 sec
00:34:30.808  
00:34:30.808  Asymmetric Namespace Access Capabilities
00:34:30.808    ANA Optimized State               : Supported
00:34:30.808    ANA Non-Optimized State           : Supported
00:34:30.808    ANA Inaccessible State            : Supported
00:34:30.808    ANA Persistent Loss State         : Supported
00:34:30.808    ANA Change State                  : Supported
00:34:30.808    ANAGRPID is not changed           : No
00:34:30.808    Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:34:30.808  
00:34:30.808  ANA Group Identifier Maximum        : 128
00:34:30.808  Number of ANA Group Identifiers     : 128
00:34:30.808  Max Number of Allowed Namespaces    : 1024
00:34:30.808  Subsystem NQN:                         nqn.2016-06.io.spdk:testnqn
00:34:30.808  Command Effects Log Page:              Supported
00:34:30.808  Get Log Page Extended Data:            Supported
00:34:30.808  Telemetry Log Pages:                   Not Supported
00:34:30.808  Persistent Event Log Pages:            Not Supported
00:34:30.808  Supported Log Pages Log Page:          May Support
00:34:30.808  Commands Supported & Effects Log Page: Not Supported
00:34:30.808  Feature Identifiers & Effects Log Page: May Support
00:34:30.808  NVMe-MI Commands & Effects Log Page:   May Support
00:34:30.808  Data Area 4 for Telemetry Log:         Not Supported
00:34:30.808  Error Log Page Entries Supported:      128
00:34:30.808  Keep Alive:                            Supported
00:34:30.808  Keep Alive Granularity:                1000 ms
00:34:30.808  
00:34:30.808  NVM Command Set Attributes
00:34:30.808  ==========================
00:34:30.808  Submission Queue Entry Size
00:34:30.808    Max:                       64
00:34:30.808    Min:                       64
00:34:30.808  Completion Queue Entry Size
00:34:30.808    Max:                       16
00:34:30.808    Min:                       16
00:34:30.808  Number of Namespaces:        1024
00:34:30.808  Compare Command:             Not Supported
00:34:30.808  Write Uncorrectable Command: Not Supported
00:34:30.808  Dataset Management Command:  Supported
00:34:30.808  Write Zeroes Command:        Supported
00:34:30.808  Set Features Save Field:     Not Supported
00:34:30.808  Reservations:                Not Supported
00:34:30.808  Timestamp:                   Not Supported
00:34:30.808  Copy:                        Not Supported
00:34:30.808  Volatile Write Cache:        Present
00:34:30.808  Atomic Write Unit (Normal):  1
00:34:30.808  Atomic Write Unit (PFail):   1
00:34:30.808  Atomic Compare & Write Unit: 1
00:34:30.808  Fused Compare & Write:       Not Supported
00:34:30.808  Scatter-Gather List
00:34:30.808    SGL Command Set:           Supported
00:34:30.808    SGL Keyed:                 Supported
00:34:30.808    SGL Bit Bucket Descriptor: Not Supported
00:34:30.808    SGL Metadata Pointer:      Not Supported
00:34:30.808    Oversized SGL:             Not Supported
00:34:30.808    SGL Metadata Address:      Not Supported
00:34:30.808    SGL Offset:                Supported
00:34:30.808    Transport SGL Data Block:  Not Supported
00:34:30.808  Replay Protected Memory Block:  Not Supported
00:34:30.808  
00:34:30.808  Firmware Slot Information
00:34:30.808  =========================
00:34:30.808  Active slot:                 0
00:34:30.808  
00:34:30.808  Asymmetric Namespace Access
00:34:30.808  ===========================
00:34:30.808  Change Count                    : 0
00:34:30.808  Number of ANA Group Descriptors : 1
00:34:30.808  ANA Group Descriptor            : 0
00:34:30.808    ANA Group ID                  : 1
00:34:30.808    Number of NSID Values         : 1
00:34:30.808    Change Count                  : 0
00:34:30.808    ANA State                     : 1
00:34:30.808    Namespace Identifier          : 1
00:34:30.808  
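(The descriptor above is the entire ANA log for this controller: one group, ID 1, in state 1 (ANA Optimized) holding namespace 1. A hedged way to dump the raw page from the host is nvme-cli's generic log reader; the device name and length here are assumptions:

  # 0x0c is the ANA log page identifier; 4096 bytes covers one descriptor
  nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096
)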
00:34:30.808  Commands Supported and Effects
00:34:30.808  ==============================
00:34:30.808  Admin Commands
00:34:30.808  --------------
00:34:30.808                    Get Log Page (02h): Supported 
00:34:30.808                        Identify (06h): Supported 
00:34:30.808                           Abort (08h): Supported 
00:34:30.808                    Set Features (09h): Supported 
00:34:30.808                    Get Features (0Ah): Supported 
00:34:30.808      Asynchronous Event Request (0Ch): Supported 
00:34:30.808                      Keep Alive (18h): Supported 
00:34:30.808  I/O Commands
00:34:30.808  ------------
00:34:30.808                           Flush (00h): Supported 
00:34:30.808                           Write (01h): Supported LBA-Change 
00:34:30.808                            Read (02h): Supported 
00:34:30.808                    Write Zeroes (08h): Supported LBA-Change 
00:34:30.808              Dataset Management (09h): Supported 
00:34:30.808  
00:34:30.808  Error Log
00:34:30.808  =========
00:34:30.808  Entry: 0
00:34:30.808  Error Count:            0x3
00:34:30.808  Submission Queue Id:    0x0
00:34:30.808  Command Id:             0x5
00:34:30.808  Phase Bit:              0
00:34:30.808  Status Code:            0x2
00:34:30.808  Status Code Type:       0x0
00:34:30.808  Do Not Retry:           1
00:34:30.808  Error Location:         0x28
00:34:30.808  LBA:                    0x0
00:34:30.808  Namespace:              0x0
00:34:30.808  Vendor Log Page:        0x0
00:34:30.808  -----------
00:34:30.808  Entry: 1
00:34:30.808  Error Count:            0x2
00:34:30.808  Submission Queue Id:    0x0
00:34:30.808  Command Id:             0x5
00:34:30.808  Phase Bit:              0
00:34:30.808  Status Code:            0x2
00:34:30.809  Status Code Type:       0x0
00:34:30.809  Do Not Retry:           1
00:34:30.809  Error Location:         0x28
00:34:30.809  LBA:                    0x0
00:34:30.809  Namespace:              0x0
00:34:30.809  Vendor Log Page:        0x0
00:34:30.809  -----------
00:34:30.809  Entry: 2
00:34:30.809  Error Count:            0x1
00:34:30.809  Submission Queue Id:    0x0
00:34:30.809  Command Id:             0x0
00:34:30.809  Phase Bit:              0
00:34:30.809  Status Code:            0x2
00:34:30.809  Status Code Type:       0x0
00:34:30.809  Do Not Retry:           1
00:34:30.809  Error Location:         0x28
00:34:30.809  LBA:                    0x0
00:34:30.809  Namespace:              0x0
00:34:30.809  Vendor Log Page:        0x0
00:34:30.809  
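(The three entries above, all Status Code 0x2 (Invalid Field in Command) with Do Not Retry set, plausibly correspond to the get_feature(0x01/0x02/0x04) failures reported earlier: the kernel target rejects those optional feature identifiers. A hedged sketch of reading the same entries from a connected host, with the device name assumed:

  nvme error-log /dev/nvme0 --log-entries=3
)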
00:34:30.809  Number of Queues
00:34:30.809  ================
00:34:30.809  Number of I/O Submission Queues:      128
00:34:30.809  Number of I/O Completion Queues:      128
00:34:30.809  
00:34:30.809  ZNS Specific Controller Data
00:34:30.809  ============================
00:34:30.809  Zone Append Size Limit:      0
00:34:30.809  
00:34:30.809  
00:34:30.809  Active Namespaces
00:34:30.809  =================
00:34:30.809  get_feature(0x05) failed
00:34:30.809  Namespace ID:1
00:34:30.809  Command Set Identifier:                NVM (00h)
00:34:30.809  Deallocate:                            Supported
00:34:30.809  Deallocated/Unwritten Error:           Not Supported
00:34:30.809  Deallocated Read Value:                Unknown
00:34:30.809  Deallocate in Write Zeroes:            Not Supported
00:34:30.809  Deallocated Guard Field:               0xFFFF
00:34:30.809  Flush:                                 Supported
00:34:30.809  Reservation:                           Not Supported
00:34:30.809  Namespace Sharing Capabilities:        Multiple Controllers
00:34:30.809  Size (in LBAs):                        3907029168 (1863GiB)
00:34:30.809  Capacity (in LBAs):                    3907029168 (1863GiB)
00:34:30.809  Utilization (in LBAs):                 3907029168 (1863GiB)
00:34:30.809  UUID:                                  f38cc081-c6f9-467e-b813-55597444eba9
00:34:30.809  Thin Provisioning:                     Not Supported
00:34:30.809  Per-NS Atomic Units:                   Yes
00:34:30.809    Atomic Boundary Size (Normal):       0
00:34:30.809    Atomic Boundary Size (PFail):        0
00:34:30.809    Atomic Boundary Offset:              0
00:34:30.809  NGUID/EUI64 Never Reused:              No
00:34:30.809  ANA group ID:                          1
00:34:30.809  Namespace Write Protected:             No
00:34:30.809  Number of LBA Formats:                 1
00:34:30.809  Current LBA Format:                    LBA Format #00
00:34:30.809  LBA Format #00: Data Size:   512  Metadata Size:     0
00:34:30.809  
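(Namespace 1 above reports a single 512-byte LBA format and about 1863 GiB shared across multiple controllers. As a hedged sketch, the same identify-namespace data can be pulled on the host with nvme-cli; the device name is an assumption:

  nvme id-ns /dev/nvme0 --namespace-id=1
)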
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:30.809   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:34:30.809  rmmod nvme_rdma
00:34:31.068  rmmod nvme_fabrics
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet
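(clean_kernel_target above unwinds the kernel nvmet target through configfs. A standalone sketch mirroring the traced commands, with paths taken verbatim from the log; that the traced 'echo 0' lands in the namespace's enable attribute is an assumption:

  nqn=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable  # assumption: target of the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                 # unlink subsystem from the port
  rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/$nqn
  modprobe -r nvmet_rdma nvmet                                           # unload target modules last
)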
00:34:31.068   14:00:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:34:34.355  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:34:34.355  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:34:36.259  0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
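(setup.sh then hands the ioatdma and NVMe devices back to vfio-pci for userspace use. A hedged sketch of one such rebind through sysfs, with the device address from the log; setup.sh's actual mechanics may differ:

  dev=0000:d8:00.0
  echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
  echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
  echo "$dev" > /sys/bus/pci/drivers_probe
)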
00:34:36.259  
00:34:36.259  real	0m17.385s
00:34:36.259  user	0m4.635s
00:34:36.259  sys	0m10.129s
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:34:36.259  ************************************
00:34:36.259  END TEST nvmf_identify_kernel_target
00:34:36.259  ************************************
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.259  ************************************
00:34:36.259  START TEST nvmf_auth_host
00:34:36.259  ************************************
00:34:36.259   14:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:34:36.534  * Looking for test storage...
00:34:36.534  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:36.534    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:36.534     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:36.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:36.535  		--rc genhtml_branch_coverage=1
00:34:36.535  		--rc genhtml_function_coverage=1
00:34:36.535  		--rc genhtml_legend=1
00:34:36.535  		--rc geninfo_all_blocks=1
00:34:36.535  		--rc geninfo_unexecuted_blocks=1
00:34:36.535  		
00:34:36.535  		'
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:36.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:36.535  		--rc genhtml_branch_coverage=1
00:34:36.535  		--rc genhtml_function_coverage=1
00:34:36.535  		--rc genhtml_legend=1
00:34:36.535  		--rc geninfo_all_blocks=1
00:34:36.535  		--rc geninfo_unexecuted_blocks=1
00:34:36.535  		
00:34:36.535  		'
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:36.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:36.535  		--rc genhtml_branch_coverage=1
00:34:36.535  		--rc genhtml_function_coverage=1
00:34:36.535  		--rc genhtml_legend=1
00:34:36.535  		--rc geninfo_all_blocks=1
00:34:36.535  		--rc geninfo_unexecuted_blocks=1
00:34:36.535  		
00:34:36.535  		'
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:36.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:36.535  		--rc genhtml_branch_coverage=1
00:34:36.535  		--rc genhtml_function_coverage=1
00:34:36.535  		--rc genhtml_legend=1
00:34:36.535  		--rc geninfo_all_blocks=1
00:34:36.535  		--rc geninfo_unexecuted_blocks=1
00:34:36.535  		
00:34:36.535  		'
00:34:36.535   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:36.535    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:36.535     14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:36.535      14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:36.535      14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:36.535      14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:36.535      14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:34:36.536      14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:36.536  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
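(auth.sh declares a 3x5 matrix of digests and DH groups here; a sketch of the sweep the arrays imply, with the loop body elided since it is not shown in this section:

  for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
          : # assumption: one authenticated connect is exercised per combination
      done
  done
)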
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:36.536    14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
00:34:36.536   14:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:34:44.749  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:34:44.749  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:34:44.749  Found net devices under 0000:d9:00.0: mlx_0_0
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:44.749   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:34:44.750  Found net devices under 0000:d9:00.1: mlx_0_1
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:34:44.750    14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:34:44.750   14:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm
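(load_ib_rdma_modules pulls in the kernel RDMA stack module by module; the same sequence as a standalone loop, with the module list copied from the trace:

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
)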
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:34:44.750  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:34:44.750      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:34:44.750      altname enp217s0f0np0
00:34:44.750      altname ens818f0np0
00:34:44.750      inet 192.168.100.8/24 scope global mlx_0_0
00:34:44.750         valid_lft forever preferred_lft forever
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:34:44.750  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:34:44.750      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:34:44.750      altname enp217s0f1np1
00:34:44.750      altname ens818f1np1
00:34:44.750      inet 192.168.100.9/24 scope global mlx_0_1
00:34:44.750         valid_lft forever preferred_lft forever
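(get_ip_address reduces to the small ip/awk/cut pipeline visible in the xtrace above; as a one-liner for the first port, interface name from the log:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
)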
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:34:44.750      14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:34:44.750      14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:34:44.750     14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:34:44.750  192.168.100.9'
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:34:44.750  192.168.100.9'
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:34:44.750  192.168.100.9'
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1
00:34:44.750    14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma
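
The two addresses are then split into first/second target IPs and the RDMA transport options are fixed before loading nvme-rdma. The same selection, sketched as standalone shell:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma
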
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.750   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3523773
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3523773
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3523773 ']'
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:44.751   14:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
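
With nvmf_tgt up, the script installs cleanup traps so the app and kernel-target state are torn down even on interrupt. The pattern, reduced to its shape (output_dir standing in for the workspace output path in the trace):

    # dump the auth log and run cleanup on Ctrl-C, kill, or normal exit
    trap 'cat "$output_dir/nvme-auth.log"; cleanup' SIGINT SIGTERM EXIT
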
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f16556f212cef6fb6245ca711a8fb30c
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nIE
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f16556f212cef6fb6245ca711a8fb30c 0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f16556f212cef6fb6245ca711a8fb30c 0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f16556f212cef6fb6245ca711a8fb30c
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nIE
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nIE
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nIE
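
keys[0] above came from gen_dhchap_key: 16 random bytes are hex-encoded with xxd (so a "null, 32" key is 32 hex characters), written to a mode-0600 tempfile, and wrapped into the DHHC-1 secret representation by the `python -` step. A sketch of that wrapping, assuming (per the DHHC-1 format) the payload is base64 of the key characters followed by their little-endian CRC-32:

    key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex chars; digest id 0 = "null"
    file=$(mktemp -t spdk.key-null.XXX)
    # assumption: payload = base64(key bytes + CRC-32 of key, little-endian),
    # mirroring the inline python in the trace above
    payload=$(python3 -c 'import base64,zlib,sys;k=sys.argv[1].encode();print(base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode())' "$key")
    echo "DHHC-1:00:${payload}:" > "$file"
    chmod 0600 "$file"

The same helper is reused below for every entry in keys[] and ckeys[], only the digest id (00/01/02/03 for null/sha256/sha384/sha512) and the random length change.
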
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2355c7f97f898554491d50c26d9fce43ad31f6ff73c5215ea432d5318eb8c2b9
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.N16
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2355c7f97f898554491d50c26d9fce43ad31f6ff73c5215ea432d5318eb8c2b9 3
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2355c7f97f898554491d50c26d9fce43ad31f6ff73c5215ea432d5318eb8c2b9 3
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2355c7f97f898554491d50c26d9fce43ad31f6ff73c5215ea432d5318eb8c2b9
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.N16
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.N16
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.N16
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2b3f84ead7d845ed211ea66b059bc6a2fdbd2cebaeb7099
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JRk
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2b3f84ead7d845ed211ea66b059bc6a2fdbd2cebaeb7099 0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2b3f84ead7d845ed211ea66b059bc6a2fdbd2cebaeb7099 0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f2b3f84ead7d845ed211ea66b059bc6a2fdbd2cebaeb7099
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JRk
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JRk
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JRk
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64eb7bdc7cdbea4f3a5f49b13f64fe89b302aa4098c0cf38
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H5l
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64eb7bdc7cdbea4f3a5f49b13f64fe89b302aa4098c0cf38 2
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64eb7bdc7cdbea4f3a5f49b13f64fe89b302aa4098c0cf38 2
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64eb7bdc7cdbea4f3a5f49b13f64fe89b302aa4098c0cf38
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H5l
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H5l
00:34:44.751   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.H5l
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa077b11888bff5c6cf3d49572b23167
00:34:44.751     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xYv
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa077b11888bff5c6cf3d49572b23167 1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa077b11888bff5c6cf3d49572b23167 1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa077b11888bff5c6cf3d49572b23167
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xYv
00:34:44.751    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xYv
00:34:44.752   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xYv
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:34:44.752     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7fcc4bbf435cd0d6fd7d1607d81bd0ed
00:34:44.752     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rA6
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7fcc4bbf435cd0d6fd7d1607d81bd0ed 1
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7fcc4bbf435cd0d6fd7d1607d81bd0ed 1
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7fcc4bbf435cd0d6fd7d1607d81bd0ed
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:34:44.752    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rA6
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rA6
00:34:45.010   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rA6
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:34:45.010     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33115bde1af13cf91ed8176865e4341976f8573ef8eaa791
00:34:45.010     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aSN
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33115bde1af13cf91ed8176865e4341976f8573ef8eaa791 2
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33115bde1af13cf91ed8176865e4341976f8573ef8eaa791 2
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33115bde1af13cf91ed8176865e4341976f8573ef8eaa791
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aSN
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aSN
00:34:45.010   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.aSN
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:34:45.010     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2b94f17d89eac7072b05e9bb55ee2b35
00:34:45.010     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.V0N
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2b94f17d89eac7072b05e9bb55ee2b35 0
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2b94f17d89eac7072b05e9bb55ee2b35 0
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2b94f17d89eac7072b05e9bb55ee2b35
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.V0N
00:34:45.010    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.V0N
00:34:45.010   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.V0N
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:34:45.011     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48629d448a27f8149195adea8d36aa43ad09749af54db90a81a97773455ad175
00:34:45.011     14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.N7k
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48629d448a27f8149195adea8d36aa43ad09749af54db90a81a97773455ad175 3
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48629d448a27f8149195adea8d36aa43ad09749af54db90a81a97773455ad175 3
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48629d448a27f8149195adea8d36aa43ad09749af54db90a81a97773455ad175
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.N7k
00:34:45.011    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.N7k
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.N7k
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3523773
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3523773 ']'
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:45.011  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:45.011   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nIE
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.N16 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N16
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JRk
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.H5l ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H5l
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xYv
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rA6 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rA6
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.aSN
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.V0N ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.V0N
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.N7k
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
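
Each generated secret file is then registered with the running target over the RPC socket; ckey4 is skipped because keys[4] was generated without a controller key (the `[[ -n '' ]]` guard just above). Driven by hand instead of the loop, the same calls look like:

    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.nIE
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N16
    ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.JRk
    # key2/ckey2, key3/ckey3, key4 follow the same pattern; ckey4 is empty
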
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:45.270    14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:34:45.270   14:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:34:45.270   14:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:34:45.270   14:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:34:48.560  Waiting for block devices as requested
00:34:48.560  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:48.560  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:48.560  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:48.819  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:48.819  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:48.819  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:49.079  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:49.079  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:49.079  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:49.338  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:49.338  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:49.338  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:49.597  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:49.597  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:49.597  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:49.597  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:49.856  0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:34:50.424   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:34:50.683  No valid GPT data, bailing
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
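
Before exporting a namespace, the script confirms the NVMe block device is not already in use; "No valid GPT data, bailing" from spdk-gpt.py plus an empty PTTYPE from blkid means the disk carries no partition table and is free. The check, reduced to a standalone sketch:

    pt=$(blkid -s PTTYPE -o value /dev/nvme0n1)
    if [[ -z $pt ]]; then
        nvme=/dev/nvme0n1   # no partition table -> safe to export
    fi
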
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
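
The mkdir/echo/ln sequence above is the whole kernel soft-target definition through configfs. The xtrace does not show where each echo is redirected; mapped onto the standard nvmet configfs attributes (an assumption, but these are the stock names), the equivalent block is:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1             > "$subsys/attr_allow_any_host"       # assumed target of `echo 1`
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
    echo rdma          > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
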
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420
00:34:50.684  
00:34:50.684  Discovery Log Number of Records 2, Generation counter 2
00:34:50.684  =====Discovery Log Entry 0======
00:34:50.684  trtype:  rdma
00:34:50.684  adrfam:  ipv4
00:34:50.684  subtype: current discovery subsystem
00:34:50.684  treq:    not specified, sq flow control disable supported
00:34:50.684  portid:  1
00:34:50.684  trsvcid: 4420
00:34:50.684  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:34:50.684  traddr:  192.168.100.8
00:34:50.684  eflags:  none
00:34:50.684  rdma_prtype: not specified
00:34:50.684  rdma_qptype: connected
00:34:50.684  rdma_cms:    rdma-cm
00:34:50.684  rdma_pkey: 0x0000
00:34:50.684  =====Discovery Log Entry 1======
00:34:50.684  trtype:  rdma
00:34:50.684  adrfam:  ipv4
00:34:50.684  subtype: nvme subsystem
00:34:50.684  treq:    not specified, sq flow control disable supported
00:34:50.684  portid:  1
00:34:50.684  trsvcid: 4420
00:34:50.684  subnqn:  nqn.2024-02.io.spdk:cnode0
00:34:50.684  traddr:  192.168.100.8
00:34:50.684  eflags:  none
00:34:50.684  rdma_prtype: not specified
00:34:50.684  rdma_qptype: connected
00:34:50.684  rdma_cms:    rdma-cm
00:34:50.684  rdma_pkey: 0x0000
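
The discovery page confirms the kernel target is live: entry 0 is the discovery subsystem itself, entry 1 is nqn.2024-02.io.spdk:cnode0 on rdma/4420 at 192.168.100.8. From here an initiator could also connect with nvme-cli directly; a hedged sketch, assuming nvme-cli's DH-HMAC-CHAP flags (secret elided):

    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --hostnqn nqn.2024-02.io.spdk:host0 \
        --dhchap-secret 'DHHC-1:00:...'   # host secret, not shown here
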
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
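
nvmet_auth_set_key pushes the chosen digest, DH group, and both secrets to the kernel side for host0. The echo targets are again not visible in the trace; mapped onto the kernel's standard nvmet host attributes (an assumption), the step is:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'          > "$host/dhchap_hash"
    echo ffdhe2048               > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:ZjJi...==:'  > "$host/dhchap_key"       # host secret (truncated)
    echo 'DHHC-1:02:NjRl...==:'  > "$host/dhchap_ctrl_key"  # controller secret (truncated)
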
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:50.684    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.684   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
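
On the SPDK host side, connect_authenticate is two RPCs: enable the digests/DH groups, then attach with the key names registered earlier. Issued by hand, the first (all-digests, all-groups) pass above is:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
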
00:34:50.943  nvme0n1
00:34:50.943   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.943    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:50.943    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.943    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.943    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:50.943    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.943   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:50.943   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:50.943   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.943   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
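
Each successful attach is verified and torn down the same way before the next digest/dhgroup/key combination: list controllers, check the name, detach. As a one-off:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] && ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The loop below then repeats connect_authenticate for every keyid under sha256/ffdhe2048, and (beyond this excerpt) for each remaining digest and DH group.
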
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.202   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:51.202    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:51.203   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:51.203   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.203   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.203  nvme0n1
00:34:51.203   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:51.203    14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.462   14:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.462   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:51.462    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:51.462   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:51.462   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.462   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.721  nvme0n1
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.721    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:51.721    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.721    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.721    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:51.721    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
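The cycle that just completed — set_options, resolve the target address, attach, check the controller name, detach — is the body of connect_authenticate at host/auth.sh@55-65. Below is a sketch reconstructed from the xtrace, not the verbatim SPDK source; rpc_cmd forwards to SPDK's scripts/rpc.py, and the rdma transport and NQNs are simply the values this run uses:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the key under test; DH-HMAC-CHAP runs during connect,
        # so a successful attach implies authentication succeeded.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the controller came up under the expected name, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }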
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:34:51.721   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:51.722    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.722   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.981  nvme0n1
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
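The indented nvmf/common.sh@769-783 block that repeats before every attach resolves the target address: it maps each transport to the name of the environment variable carrying that transport's address, then expands that variable indirectly. A sketch of the same logic as the trace implies (TEST_TRANSPORT is rdma on this rig, so it yields $NVMF_FIRST_TARGET_IP, i.e. 192.168.100.8):

    get_main_ns_ip() {
        local ip var
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        var=${ip_candidates[$TEST_TRANSPORT]}
        [[ -n $var ]] || return 1
        ip=${!var}               # indirect expansion: the variable named by $var
        [[ -n $ip ]] && echo "$ip"
    }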
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:51.981    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.981   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.241  nvme0n1
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:52.241    14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.241   14:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.500  nvme0n1
00:34:52.500   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.500    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.500    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.500    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.500    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.500    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.500   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.500   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.500   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
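At this point the keyid sweep for ffdhe2048 is finished and host/auth.sh@101 advances to the next DH group. The whole section is one matrix sweep, reconstructed here from the loop markers in the trace; the outer digest loop is an assumption inferred from both functions taking a digest argument, since only @101/@102 are visible in this excerpt:

    for digest in "${digests[@]}"; do        # sha256 throughout this excerpt
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048, ffdhe3072, ffdhe4096, ...
            for keyid in "${!keys[@]}"; do   # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done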
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:52.501    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.501   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.760  nvme0n1
00:34:52.760   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.760    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.760    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.760    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.760    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.760    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.760   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.760   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.760   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.760   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:53.019   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.020  nvme0n1
00:34:53.020   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.020    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.279    14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.279   14:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.539  nvme0n1
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.539    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.539   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.798  nvme0n1
00:34:53.798   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.798    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.798    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.798    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.798    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.798    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.798   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.798   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.798   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.798   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.799    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.799   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.058  nvme0n1
00:34:54.058   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.058    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.058    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.058    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.058    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.058    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.058   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.058   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.058   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.058   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
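The target-side half of each cycle (host/auth.sh@42-51) is the three echoes of 'hmac(sha256)', the DH group, and the DHHC-1 secrets. On a Linux kernel nvmet target those writes land in the per-host configfs entry, which is how the target learns what to negotiate. A sketch of nvmet_auth_set_key as the trace suggests; the configfs paths are an assumption from the standard nvmet layout, not something this log shows:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "$key"          > "$host/dhchap_key"      # host secret (DHHC-1:...)
        # A controller key is set only when bidirectional auth is exercised
        # (keyid 4 has no ckey in this run).
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }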
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:54.317    14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.317   14:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.577  nvme0n1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:54.577    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.577   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.836  nvme0n1
00:34:54.836   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.836    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.836    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.836    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.836    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.836    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.836   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.836   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.836   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.836   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:55.095   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:55.096    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.096   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.355  nvme0n1
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:55.355    14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:55.355   14:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:55.355   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.355   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.614  nvme0n1
00:34:55.614   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.614    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.614    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.614    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.614    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.614    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.614   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:55.614   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:55.614   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.614   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.872   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.872    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:55.873    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:55.873   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:55.873   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.873   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.132  nvme0n1
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:56.132    14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:56.132   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:56.133   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.133   14:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.701  nvme0n1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:56.701    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.701   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.270  nvme0n1
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.270    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:57.270    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:57.270    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.270    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.270    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:57.270   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:57.271    14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.271   14:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.530  nvme0n1
00:34:57.530   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.530    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:57.530    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.530    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.530    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:57.530    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:57.789   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:57.790    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.790   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.049  nvme0n1
00:34:58.049   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.049    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:58.049    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:58.049    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.049    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.049    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.308   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:58.308    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:58.309    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:58.309    14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:58.309   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:58.309   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.309   14:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.567  nvme0n1
00:34:58.567   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.568    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:58.568    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.568    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:58.568    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.568    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:58.826   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:58.827    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.827   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.395  nvme0n1
00:34:59.395   14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.395    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:59.395    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:59.395    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.395    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.395    14:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:59.395    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.395   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.963  nvme0n1
00:34:59.963   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.963    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:59.963    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:59.963    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.963    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.963    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
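The block above is one complete pass of the per-key test cycle: nvmet_auth_set_key stages the key on the target side, bdev_nvme_set_options restricts the host to the digest/dhgroup under test, bdev_nvme_attach_controller connects with the DH-HMAC-CHAP material, the resulting nvme0 controller (whose namespace surfaces as bdev nvme0n1) is verified via bdev_nvme_get_controllers, and the controller is detached. A minimal sketch of that cycle, assuming the harness's rpc_cmd wrapper and keys/ckeys already registered earlier in the run:

    nvmet_auth_set_key sha256 ffdhe8192 1            # stage key 1 on the target
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0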
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:00.223    14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.223   14:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.792  nvme0n1
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
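The secrets echoed by nvmet_auth_set_key use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> records the hash the secret was generated with (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A hypothetical sanity check against the keyid=2 secret above:

    key='DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:'
    b64=${key#DHHC-1:??:}                        # strip the DHHC-1:<t>: prefix
    printf '%s' "${b64%:}" | base64 -d | wc -c   # -> 36 = 32-byte secret + CRC-32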
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:00.792    14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.792   14:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.360  nvme0n1
00:35:01.360   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.360    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.360    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.360    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.360    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.360    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
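For reference, the attach command that recurs throughout this trace breaks down as follows (values copied from the keyid=3 attach above; the args array is only an illustrative way to annotate each flag, and key3/ckey3 are key names presumably registered with the keyring earlier in the run, not the secrets themselves):

    args=(
      -b nvme0                        # controller name; namespace appears as bdev nvme0n1
      -t rdma -f ipv4                 # transport type and address family
      -a 192.168.100.8 -s 4420        # target traddr and trsvcid
      -q nqn.2024-02.io.spdk:host0    # host NQN presented to the target
      -n nqn.2024-02.io.spdk:cnode0   # subsystem NQN being connected
      --dhchap-key key3               # host DH-HMAC-CHAP key name
      --dhchap-ctrlr-key ckey3        # controller key name: requests bidirectional auth
    )
    rpc_cmd bdev_nvme_attach_controller "${args[@]}"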
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:01.619   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:01.620    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.620   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.188  nvme0n1
00:35:02.188   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
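Note the keyid=4 case just completed: ckeys[4] is empty, so [[ -z '' ]] at auth.sh@51 is true and no controller key is echoed, and the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at auth.sh@58 collapses to nothing, which is why the attach at auth.sh@61 carries --dhchap-key key4 only, i.e. unidirectional authentication. A short demo of that expansion:

    ckeys=( [1]='some-secret' [4]='' )
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # -> 0: no controller key, no bidirectional auth
    keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # -> --dhchap-ctrlr-key ckey1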
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:02.189    14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.189   14:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.449  nvme0n1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
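At 00:35:02 the outer loops advanced: the digest moved from sha256 to sha384 and the dhgroup wrapped back to ffdhe2048, which confirms the sweep shape visible at auth.sh@100-@102. Roughly (array contents inferred from this trace, so hedged):

    for digest in "${digests[@]}"; do       # sha256, sha384, ... (auth.sh@100)
      for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 .. ffdhe8192 (auth.sh@101)
        for keyid in "${!keys[@]}"; do      # 0..4 (auth.sh@102)
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
        done
      done
    done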
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:02.449    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.449   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.709  nvme0n1
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.709    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.709    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.709    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.709    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.709    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.709   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.969  nvme0n1
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.969    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.969   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:35:03.228   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.229  nvme0n1
00:35:03.229   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.229    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.488    14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.488   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.488   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.488   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.488   14:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:03.488    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.488   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.747  nvme0n1
00:35:03.747   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.747    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.747    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.747    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
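The get_main_ns_ip helper traced before every attach resolves the connect address indirectly: it maps the transport to the name of an environment variable, then dereferences that name; on this rig that is rdma -> NVMF_FIRST_TARGET_IP -> 192.168.100.8. A condensed sketch of nvmf/common.sh@769-@783 (TEST_TRANSPORT is an assumed harness variable holding "rdma" here):

    get_main_ns_ip() {
      local ip
      local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
      ip=${ip_candidates[$TEST_TRANSPORT]}    # rdma -> NVMF_FIRST_TARGET_IP
      [[ -n $ip && -n ${!ip} ]] || return 1   # both name and value must resolve
      echo "${!ip}"                           # -> 192.168.100.8
    }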
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:03.748    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.748   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.007  nvme0n1
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.007    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.007    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.007    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.007    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.007    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
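Every rpc_cmd in this log is bracketed by the same helper lines: xtrace_disable (autotest_common.sh@563), the set +x it performs (@10), and a [[ 0 == 0 ]] check (@591) once tracing resumes; that is, the harness saves the RPC's return code, mutes tracing around its own bookkeeping, and asserts success afterward. A minimal, hypothetical re-creation of that bracketing (not the literal autotest_common.sh source):

    xtrace_disable() { PREV_SETX=${-//[^x]/}; set +x; }   # remember -x, then drop it
    xtrace_restore() { [[ -n $PREV_SETX ]] && set -x; }
    set -x
    rc=0                 # stand-in for the saved RPC return code
    xtrace_disable       # helper bookkeeping happens untraced from here
    xtrace_restore
    [[ $rc == 0 ]]       # traces as `[[ 0 == 0 ]]` on success, like @591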
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.007   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:04.008    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.008   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.267  nvme0n1
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
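The nvmet_auth_set_key echoes above (auth.sh@48-51) provision the target side of the DH-HMAC-CHAP exchange. A minimal sketch of where those writes land, assuming the standard Linux nvmet configfs layout and attribute names (not shown in this log); the host NQN is taken from the attach line above and the key strings are placeholders:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)'      > "$host/dhchap_hash"      # digest for the DH-HMAC-CHAP exchange
  echo ffdhe3072           > "$host/dhchap_dhgroup"   # FFDHE group for key agreement
  echo 'DHHC-1:00:<host>:' > "$host/dhchap_key"       # host secret
  echo 'DHHC-1:02:<ctrl>:' > "$host/dhchap_ctrl_key"  # controller secret, written only when a ckey exists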
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.267   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:04.267    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:04.268    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:04.268    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:04.268    14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:04.268   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:04.268   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.268   14:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.527  nvme0n1
00:35:04.527   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.527    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.527    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.527    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.527    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.527    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.527   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.527   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.527   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.527   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
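The extra-indented subshell trace above (nvmf/common.sh@769-783) is get_main_ns_ip picking the target address. A sketch of the reconstructed logic, assuming the transport comes from a TEST_TRANSPORT-style variable (here already expanded to rdma in the trace) and using bash indirect expansion to read the chosen variable:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip} ]] && return 1            # e.g. NVMF_FIRST_TARGET_IP=192.168.100.8
      echo "${!ip}"
  }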
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:04.786    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.786   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.046  nvme0n1
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
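Each connect_authenticate pass above boils down to four JSON-RPC calls against the SPDK target. The method names and flags below are verbatim from the trace; the rpc.py path is an assumption, standing in for the script's rpc_cmd wrapper:

  rpc=scripts/rpc.py   # assumed location of SPDK's RPC client
  "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded, bdev exists
  "$rpc" bdev_nvme_detach_controller nvme0   # tear down before the next keyid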
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:05.046    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.046   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.306  nvme0n1
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
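Note that the keyid=4 attach above carries no --dhchap-ctrlr-key: ckeys[4] is empty, so the ${ckeys[keyid]:+...} expansion at auth.sh@58 collapses the ckey array to zero words. A self-contained sketch of that behaviour, with a hypothetical key string:

  ckeys=([1]='DHHC-1:02:example:' [4]='')   # indexed array, as in the script
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid adds ${#ckey[@]} arg(s): ${ckey[*]}"
  done
  # keyid=1 adds 2 arg(s): --dhchap-ctrlr-key ckey1
  # keyid=4 adds 0 arg(s):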
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:05.306    14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.306   14:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.567  nvme0n1
00:35:05.567   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.567    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.567    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.567    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.567    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.567    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.828   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.828    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.828    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:05.829    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:05.829   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:05.829   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.829   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.088  nvme0n1
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
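A note on the DHHC-1 strings themselves, based on the common in-band-auth secret representation rather than anything this log confirms: the field after DHHC-1: names the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 field decodes to the raw secret followed by a 4-byte CRC-32, which is why the :01: keys here decode to 36 bytes (32-byte secret) and the :00:/:02: ones to 52 (48-byte secret):

  key='DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:'
  cut -d: -f3 <<< "$key" | base64 -d | wc -c   # 36 = 32-byte secret + CRC-32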
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:06.088    14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.088   14:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.656  nvme0n1
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.656   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:06.656    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:06.657    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:06.657    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:06.657    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:06.657    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:06.657   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:06.657   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.657   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.916  nvme0n1
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:06.916    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.916   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.175  nvme0n1
00:35:07.175   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.175    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:07.175    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.175    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.175    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:07.175    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
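With ffdhe4096 exhausted, the next trace line (auth.sh@101) shows the outer loop advancing to ffdhe6144. The skeleton behind the @101-@104 markers, reconstructed from the trace; the dhgroup list below is only what this excerpt covers, and the script's full list may be longer:

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do       # auth.sh@101
      for keyid in "${!keys[@]}"; do                      # auth.sh@102: keyid 0..4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # auth.sh@103: provision the target
          connect_authenticate sha384 "$dhgroup" "$keyid" # auth.sh@104: attach, verify, detach
      done
  done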
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:07.435    14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.435   14:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.693  nvme0n1
00:35:07.693   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.693    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:07.693    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:07.693    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.693    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.693    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.693   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:07.693   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:07.693   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.694   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:07.951   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:07.952    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.952   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.209  nvme0n1
00:35:08.209   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.209    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.209    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:08.209    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.209    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.209    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.209   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:08.209   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.209   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.209   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
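[editor's note] The block above (auth.sh@103 through @65) is one complete pass of the per-key loop: nvmet_auth_set_key programs the kernel target with the digest, DH group, and DHHC-1 host/controller secrets for this keyid, and connect_authenticate then restricts the SPDK initiator to the same digest/dhgroup and attaches with the matching key names. A minimal sketch of that round trip, reconstructed from the trace; the RPC lines are verbatim from the log, but the configfs destinations of the four echoes are assumptions, since xtrace shows only the echoed strings, not their redirections:

    # One iteration, sha384/ffdhe6144/keyid=1 (values taken from the trace above).
    key='DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:'
    ckey='DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:'
    hostnqn=nqn.2024-02.io.spdk:host0
    host=/sys/kernel/config/nvmet/hosts/$hostnqn      # assumed configfs layout
    echo 'hmac(sha384)' > "$host/dhchap_hash"         # auth.sh@48
    echo ffdhe6144      > "$host/dhchap_dhgroup"      # auth.sh@49
    echo "$key"         > "$host/dhchap_key"          # auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51
    # Host side: allow only this digest/dhgroup, then authenticate on attach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1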
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:08.468   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:08.469    14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.469   14:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.728  nvme0n1
00:35:08.728   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.728    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:08.728    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.728    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.728    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.728    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
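[editor's note] The indented sub-trace repeated at nvmf/common.sh@769 through @783 is get_main_ns_ip resolving the connect address: it maps the transport to the name of an environment variable, confirms both the name and its value are non-empty, and echoes the value through bash indirect expansion. A sketch consistent with the trace; the transport variable's actual name is a guess, since xtrace only shows its expanded value, rdma:

    get_main_ns_ip() {                        # nvmf/common.sh@769-@783, reconstructed
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP       # @772
            [tcp]=NVMF_INITIATOR_IP           # @773
        )
        # TEST_TRANSPORT is an assumed name for the variable traced as 'rdma' (@775)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # @776: ip=NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1           # @778: indirection yields 192.168.100.8
        echo "${!ip}"                         # @783
    }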
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:08.986   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:08.987    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.987   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.246  nvme0n1
00:35:09.246   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.246    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:09.246    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:09.246    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.246    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.246    14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.505   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:09.505   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:09.505   14:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
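[editor's note] A note on the verification step that closes every iteration (auth.sh@64-@65): the odd-looking [[ nvme0 == \n\v\m\e\0 ]] is not a string of control characters; it is how set -x prints a quoted right-hand side of ==, backslash-escaping each character so it matches literally rather than as a glob. The check simply confirms that the authenticated attach produced a controller named nvme0 before detaching it. Roughly, with rpc_cmd expanded to SPDK's scripts/rpc.py (the socket path here is an assumption):

    # Verify the attach, then tear down (auth.sh@64-@65); the -s path is assumed.
    name=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # xtrace renders the quoted RHS as \n\v\m\e\0
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_detach_controller nvme0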
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:09.505   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:09.506    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.506   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.764  nvme0n1
00:35:09.764   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.764    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:09.765    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:09.765    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.765    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.765    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
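[editor's note] keyid 4 is the asymmetric case: ckey is empty at auth.sh@46, the @51 guard ([[ -z '' ]]) skips writing a controller key, and at @58 the ${ckeys[keyid]:+...} expansion therefore produces an empty array, so the attach at @61 carries only --dhchap-key key4 and no --dhchap-ctrlr-key, i.e. unidirectional authentication. The bash :+ idiom in isolation (the array contents below are hypothetical):

    # ${var:+word} expands to word only if var is set and non-empty.
    declare -a ckeys=([1]="DHHC-1:02:example==:" [4]="")   # hypothetical values
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra args: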
00:35:10.023   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:10.024    14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.024   14:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.593  nvme0n1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
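[editor's note] The scaffolding that brackets every RPC in this log, xtrace_disable (@563), set +x (@10), and the [[ 0 == 0 ]] status check (@591), comes from common/autotest_common.sh: rpc_cmd mutes tracing while it talks to the SPDK application, then asserts the call's exit status, which xtrace prints with the variable already substituted, hence the constant-looking 0 == 0 on success. A rough reconstruction under that assumption; the body is guessed (SPDK's real rpc_cmd is more involved), only the traced helpers and line numbers come from the log:

    rpc_cmd() {                        # rough sketch, body assumed
        xtrace_disable                 # autotest_common.sh@563 / set +x @10
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                 # traced as '[[ 0 == 0 ]]' (@591) on success
    }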
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:10.593    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.593   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:11.221  nvme0n1
00:35:11.221   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:11.221    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:11.221    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:11.221    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:11.221    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:11.221    14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:11.493   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:11.493   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:11.493   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:11.493   14:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:11.493    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:11.493   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:11.494   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:11.494   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.062  nvme0n1
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:12.062    14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.062   14:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.630  nvme0n1
00:35:12.630   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.630    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:12.630    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:12.630    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.630    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.630    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:12.890    14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.890   14:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.459  nvme0n1
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
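[editor's note] With sha384 exhausted, the sweep moves on to sha512 below. A note on the secrets themselves: each is an NVMe DH-HMAC-CHAP secret in DHHC-1 representation, DHHC-1:<t>:<base64>:, where, per my reading of the NVMe-oF authentication spec, <t> selects the optional key transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the 32-, 48-, or 64-byte secret followed by a 4-byte CRC-32. That is consistent with the lengths seen in this log, e.g. the keyid-0 secret decodes to 36 bytes:

    # Length check on the keyid-0 secret from this log (32-byte secret + CRC-32 = 36).
    k='DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:'
    b64=${k#DHHC-1:*:}
    printf '%s' "${b64%:}" | base64 -d | wc -c    # prints 36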
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:13.459    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.459   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.718  nvme0n1
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.718    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:13.718    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:13.718    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.718    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.718    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
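[editor's note] The three for headers at auth.sh@100 through @102 that reappear at each phase boundary are the whole test matrix: digests x DH groups x key IDs 0-4, with nvmet_auth_set_key/connect_authenticate (@103-@104) run for every cell. Reconstructed drivers, with the array literals partly inferred; this section only shows sha384/sha512 and ffdhe2048/ffdhe6144/ffdhe8192, so the remaining entries are assumptions about what the earlier part of the log covered:

    # auth.sh@100-@104, reconstructed; array literals are partly assumed.
    digests=(sha256 sha384 sha512)                                 # sha256 assumed
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # 3072/4096 assumed
    # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets set up earlier in auth.sh.
    for digest in "${digests[@]}"; do           # @100
        for dhgroup in "${dhgroups[@]}"; do     # @101
            for keyid in "${!keys[@]}"; do      # @102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
            done
        done
    done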
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:13.718   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.719   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.978  nvme0n1
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:13.978    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.978   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
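nvmet_auth_set_key (auth.sh@42-51) programs the kernel nvmet target with the key material for the next connect. The trace records only the echoed values; the configfs destination below is an assumption based on the standard nvmet per-host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), and the keys/ckeys arrays are populated earlier in the script:

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey              # auth.sh@42
      digest=$1 dhgroup=$2 keyid=$3                    # auth.sh@44
      key=${keys[keyid]} ckey=${ckeys[keyid]}          # auth.sh@45-46

      # Assumed configfs path for the allowed host; not visible in the trace.
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "$host/dhchap_hash"     # auth.sh@48
      echo "$dhgroup"        > "$host/dhchap_dhgroup"  # auth.sh@49
      echo "$key"            > "$host/dhchap_key"      # auth.sh@50
      # The controller (bidirectional) key is optional, cf. keyid 4 below.
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
  }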
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:14.237   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.238  nvme0n1
00:35:14.238   14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.238    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:14.497    14:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
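connect_authenticate (auth.sh@55-65) then drives the host side over SPDK's JSON-RPC: pin the initiator to the digest/dhgroup under test, attach, verify the controller actually came up, detach. A sketch assembled from the trace above; rpc_cmd, the NQNs, and the port are verbatim from the log, while error handling is elided:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3                              # auth.sh@57
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # auth.sh@58

      # Offer exactly one digest and one dhgroup, forcing this combination.
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"      # auth.sh@60

      # DH-HMAC-CHAP runs during the fabrics Connect; a failure surfaces here.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"                      # auth.sh@61

      # The controller exists only if authentication succeeded.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # auth.sh@64
      rpc_cmd bdev_nvme_detach_controller nvme0                        # auth.sh@65
  }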
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.497   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:14.497    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:14.498    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:14.498   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:14.498   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.498   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.757  nvme0n1
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
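get_main_ns_ip (nvmf/common.sh@769-783) resolves the dial address through one level of variable indirection: the candidate map stores variable names, not values, which is why the trace shows [[ -z rdma ]] and [[ -z NVMF_FIRST_TARGET_IP ]] before the value check against 192.168.100.8. Reconstructed from the trace; the TEST_TRANSPORT variable name and the failure returns are assumptions:

  get_main_ns_ip() {
      local ip                                          # common.sh@769
      local -A ip_candidates=()                         # common.sh@770
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP        # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP            # common.sh@773

      # Two name checks, then one value check via indirection.
      [[ -z $TEST_TRANSPORT ]] && return 1              # common.sh@775
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}              # common.sh@776
      [[ -z ${!ip} ]] && return 1                       # common.sh@778
      echo "${!ip}"                                     # common.sh@783, here 192.168.100.8
  }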
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.757   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:14.757    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:14.758    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:14.758    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:14.758    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:14.758   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:14.758   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.758   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.016  nvme0n1
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.016    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.016    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.016    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.016    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.016    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
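Keyid 4 is the one-way case: ckeys[4] is unset, so [[ -z '' ]] at auth.sh@51 above skips writing a controller key on the target, the @58 expansion contributes nothing, and the attach at @61 carries no --dhchap-ctrlr-key, meaning only the host authenticates. The mechanism in isolation, as a self-contained snippet with the variable names from the trace:

  declare -a ckeys=()    # ckeys[4] deliberately left unset, as in this run
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # auth.sh@58
  echo "extra attach args: ${ckey[*]:-<none>}"                # prints: <none>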
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:15.016   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:15.017    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.017   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.275  nvme0n1
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.275    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.275    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.275    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.275    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.275    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:15.275   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:15.276    14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.276   14:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.533  nvme0n1
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.533    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.533    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.533    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.533    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.533    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:15.533   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.791  nvme0n1
00:35:15.791   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.791    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:16.051    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.051   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.310  nvme0n1
00:35:16.310   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.310    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.310    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:16.311    14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.311   14:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.570  nvme0n1
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
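All secrets in this run use the DHHC-1:<t>:<base64>: representation, where <t> records the optional transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key plus a CRC-32 trailer, which is why the strings end in a few extra characters. The test's key material was generated elsewhere; the commands below are only a plausible way to mint keys of the same shape with nvme-cli's gen-dhchap-key, assuming that tool is available (they are not part of this run):

  # 32-byte untransformed secret -> "DHHC-1:00:...:" shape, as for keyid 0
  nvme gen-dhchap-key --key-length=32 --hmac=0 --nqn=nqn.2024-02.io.spdk:host0
  # 64-byte secret transformed with SHA-512 -> "DHHC-1:03:...:" (keyid 4)
  nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0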
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:16.570    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.570   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.829  nvme0n1
00:35:16.829   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.829    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.829    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.829    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.829    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.088   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:17.088    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:17.089    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:17.089    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:17.089    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:17.089    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:17.089   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:17.089   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.089   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.348  nvme0n1
00:35:17.348   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.348    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:17.348    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.348    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.348    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:17.348    14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.348   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.348   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:17.348   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.348   14:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:17.348    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.348   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.607  nvme0n1
00:35:17.607   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.607    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:17.607    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.607    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.607    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:17.607    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:17.865   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:17.866    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:17.866   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.125  nvme0n1
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:18.125    14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.125   14:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.384  nvme0n1
00:35:18.384   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.384    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:18.384    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:18.384    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.384    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.643    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
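From here the outer loop (host/auth.sh@101) advances to the next DH group, ffdhe6144, and reruns all five key IDs. Each iteration drives the same host-side RPC sequence traced above; reconstructed as a sketch from the trace (rpc_cmd and get_main_ns_ip are the helpers the trace itself invokes):

    connect_authenticate() {  # host/auth.sh@55-61, as traced
        local digest=$1 dhgroup=$2 keyid=$3
        # Empty when ckeys[keyid] is unset, so keyid=4 attaches without a
        # controller key, matching the trace.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the one digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over RDMA; on success the RPC prints the created bdev
        # name, which is the bare "nvme0n1" line in the log.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }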
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:18.643   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:18.644    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.644   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.903  nvme0n1
00:35:18.903   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:18.903    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:18.903    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:18.903    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.903    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:19.162    14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:19.162   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:19.163   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.163   14:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.422  nvme0n1
00:35:19.422   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.422    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:19.422    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:19.422    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.422    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.422    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:19.681    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.681   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.941  nvme0n1
00:35:19.941   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.941    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:19.941    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.941    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:19.941    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:19.941    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.941   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:19.941   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:19.941   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.941   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:20.200    14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.200   14:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.459  nvme0n1
00:35:20.459   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.459    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:20.459    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:20.459    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.459    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.459    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.459   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:20.459   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:20.460   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.460   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:20.719    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.719   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.978  nvme0n1
00:35:20.978   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:20.978    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:20.978    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:20.978    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:20.978    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:20.978    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
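The final DH group, ffdhe8192, now runs the same five key IDs. Each cycle closes with the verification traced at host/auth.sh@64-65; as a short sketch:

    # Authentication succeeded iff the controller shows up under the
    # expected name; detach it so the next iteration starts clean.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0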
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE2NTU2ZjIxMmNlZjZmYjYyNDVjYTcxMWE4ZmIzMGMjU2T/:
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=: ]]
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM1NWM3Zjk3Zjg5ODU1NDQ5MWQ1MGMyNmQ5ZmNlNDNhZDMxZjZmZjczYzUyMTVlYTQzMmQ1MzE4ZWI4YzJiOQZiTRM=:
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:21.237    14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.237   14:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.812  nvme0n1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:21.812    14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.812   14:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:22.380  nvme0n1
00:35:22.380   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:22.381    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:22.381    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:22.381    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:22.381    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:22.381    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:22.381   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:22.381   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:22.381   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:22.381   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
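Each connect_authenticate pass (@55-@65) pins the host to one digest/dhgroup via bdev_nvme_set_options, attaches with the matching keyid, waits for the nvme0n1 namespace to surface, checks the controller name, and detaches. The same sequence as standalone rpc.py calls (a sketch; rpc.py is SPDK's scripts/rpc.py, and key1/ckey1 name keyring entries registered earlier in the test):

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0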
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:22.640    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:22.640   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.208  nvme0n1
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzMxMTViZGUxYWYxM2NmOTFlZDgxNzY4NjVlNDM0MTk3NmY4NTczZWY4ZWFhNzkx31Mvig==:
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R: ]]
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI5NGYxN2Q4OWVhYzcwNzJiMDVlOWJiNTVlZTJiMzUijv8R:
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:23.208    14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.208   14:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.777  nvme0n1
00:35:23.777   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.777    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:23.777    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.777    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:23.777    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:23.777    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.777   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:23.777   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:23.777   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.777   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:24.035   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg2MjlkNDQ4YTI3ZjgxNDkxOTVhZGVhOGQzNmFhNDNhZDA5NzQ5YWY1NGRiOTBhODFhOTc3NzM0NTVhZDE3NRHo+WI=:
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:24.036    14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.036   14:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.602  nvme0n1
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
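The keyid 4 pass above ran without --dhchap-ctrlr-key: @46 set ckey to the empty string, so authentication was unidirectional. The array expansion at @58 is what drops the flag pair; a minimal demonstration (the key values here are hypothetical placeholders):

  ckeys[1]="DHHC-1:02:placeholder:"; ckeys[4]=""
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -> no controller key, flag pair omitted
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1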
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.602    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.602  request:
00:35:24.602  {
00:35:24.602  "name": "nvme0",
00:35:24.602  "trtype": "rdma",
00:35:24.602  "traddr": "192.168.100.8",
00:35:24.602  "adrfam": "ipv4",
00:35:24.602  "trsvcid": "4420",
00:35:24.602  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:35:24.602  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:35:24.602  "prchk_reftag": false,
00:35:24.602  "prchk_guard": false,
00:35:24.602  "hdgst": false,
00:35:24.602  "ddgst": false,
00:35:24.602  "allow_unrecognized_csi": false,
00:35:24.602  "method": "bdev_nvme_attach_controller",
00:35:24.602  "req_id": 1
00:35:24.602  }
00:35:24.602  Got JSON-RPC error response
00:35:24.602  response:
00:35:24.602  {
00:35:24.602  "code": -5,
00:35:24.602  "message": "Input/output error"
00:35:24.602  }
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:35:24.602   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:24.603   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:24.603   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
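Here the negative tests begin: the target still requires a key, but @112 attaches with no DH-HMAC-CHAP key at all, so the connect must fail. The NOT wrapper traced at @652-@679 inverts the exit status; a minimal sketch (the real helper in autotest_common.sh also special-cases signal deaths, es > 128, and optional output matching):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # succeed only when the wrapped command failed
  }
  NOT false && echo 'negative test passed'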
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.861  request:
00:35:24.861  {
00:35:24.861  "name": "nvme0",
00:35:24.861  "trtype": "rdma",
00:35:24.861  "traddr": "192.168.100.8",
00:35:24.861  "adrfam": "ipv4",
00:35:24.861  "trsvcid": "4420",
00:35:24.861  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:35:24.861  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:35:24.861  "prchk_reftag": false,
00:35:24.861  "prchk_guard": false,
00:35:24.861  "hdgst": false,
00:35:24.861  "ddgst": false,
00:35:24.861  "dhchap_key": "key2",
00:35:24.861  "allow_unrecognized_csi": false,
00:35:24.861  "method": "bdev_nvme_attach_controller",
00:35:24.861  "req_id": 1
00:35:24.861  }
00:35:24.861  Got JSON-RPC error response
00:35:24.861  response:
00:35:24.861  {
00:35:24.861  "code": -5,
00:35:24.861  "message": "Input/output error"
00:35:24.861  }
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.861    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.861   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.121  request:
00:35:25.121  {
00:35:25.121  "name": "nvme0",
00:35:25.121  "trtype": "rdma",
00:35:25.121  "traddr": "192.168.100.8",
00:35:25.121  "adrfam": "ipv4",
00:35:25.121  "trsvcid": "4420",
00:35:25.121  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:35:25.121  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:35:25.121  "prchk_reftag": false,
00:35:25.121  "prchk_guard": false,
00:35:25.121  "hdgst": false,
00:35:25.121  "ddgst": false,
00:35:25.121  "dhchap_key": "key1",
00:35:25.121  "dhchap_ctrlr_key": "ckey2",
00:35:25.121  "allow_unrecognized_csi": false,
00:35:25.121  "method": "bdev_nvme_attach_controller",
00:35:25.121  "req_id": 1
00:35:25.121  }
00:35:25.121  Got JSON-RPC error response
00:35:25.121  response:
00:35:25.121  {
00:35:25.121  "code": -5,
00:35:25.121  "message": "Input/output error"
00:35:25.121  }
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
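All three mis-keyed attach attempts (@112, @117, @123) fail the same way: DH-HMAC-CHAP is rejected during the fabrics connect, rpc.py reports -5 (Input/output error), and no controller object is left behind, which the jq length checks confirm. A sketch of asserting that pattern directly (paths and keyring names as in the sketches above):

  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo 'unexpected success' >&2; exit 1
  fi
  (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))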
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:25.121    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.121   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.380  nvme0n1
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
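This is the live-rekey case: @132 moves the kernel target to keyid 2, @133 rotates the existing controller's secrets with bdev_nvme_set_keys, and @134 verifies nvme0 survived. As standalone commands (a sketch reusing the nvmet_auth_set_key helper sketched earlier; key2/ckey2 are existing keyring names):

  nvmet_auth_set_key sha256 ffdhe2048 2          # target now expects keyid 2
  ./scripts/rpc.py bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # still nvme0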
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:25.380    14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:35:25.380   14:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.380  request:
00:35:25.380  {
00:35:25.380  "name": "nvme0",
00:35:25.380  "dhchap_key": "key1",
00:35:25.380  "dhchap_ctrlr_key": "ckey2",
00:35:25.380  "method": "bdev_nvme_set_keys",
00:35:25.380  "req_id": 1
00:35:25.380  }
00:35:25.380  Got JSON-RPC error response
00:35:25.380  response:
00:35:25.380  {
00:35:25.380  "code": -13,
00:35:25.380  "message": "Permission denied"
00:35:25.380  }
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:25.380    14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:35:25.380    14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.380    14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.380    14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:35:25.380    14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:35:25.380   14:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:35:26.756    14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:35:26.756    14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:35:26.756    14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.756    14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.756    14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.756   14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:35:26.756   14:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:35:27.690    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:35:27.690    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.690    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:35:27.690    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.690    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
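After the rejected rekey at @136 the controller is expected to lose its session; the --ctrlr-loss-timeout-sec 1 from the @128 attach lets bdev_nvme delete it within about a second, and @137/@138 simply poll for that (the count drops 1, 1, 0 above). The loop, as a sketch:

  # Wait for the failed controller to be reaped (count drops to 0).
  while (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done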
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:27.690   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJiM2Y4NGVhZDdkODQ1ZWQyMTFlYTY2YjA1OWJjNmEyZmRiZDJjZWJhZWI3MDk546rTjQ==:
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==: ]]
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRlYjdiZGM3Y2RiZWE0ZjNhNWY0OWIxM2Y2NGZlODliMzAyYWE0MDk4YzBjZjM4vyDF+g==:
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.691  nvme0n1
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmEwNzdiMTE4ODhiZmY1YzZjZjNkNDk1NzJiMjMxNje/HjlO:
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh: ]]
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZjYzRiYmY0MzVjZDBkNmZkN2QxNjA3ZDgxYmQwZWQgUIzh:
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:27.691    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.691   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.949  request:
00:35:27.949  {
00:35:27.949  "name": "nvme0",
00:35:27.949  "dhchap_key": "key2",
00:35:27.949  "dhchap_ctrlr_key": "ckey1",
00:35:27.949  "method": "bdev_nvme_set_keys",
00:35:27.949  "req_id": 1
00:35:27.949  }
00:35:27.949  Got JSON-RPC error response
00:35:27.949  response:
00:35:27.949  {
00:35:27.949  "code": -13,
00:35:27.949  "message": "Permission denied"
00:35:27.949  }
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:27.949    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:35:27.949    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:35:27.949    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.949    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.949    14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:35:27.949   14:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:35:28.884    14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:35:28.884    14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:35:28.884    14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.884    14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.884    14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.884   14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:35:28.884   14:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:35:30.258  rmmod nvme_rdma
00:35:30.258  rmmod nvme_fabrics
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3523773 ']'
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3523773
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3523773 ']'
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3523773
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:30.258    14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3523773
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3523773'
00:35:30.258  killing process with pid 3523773
00:35:30.258   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3523773
00:35:30.259   14:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3523773
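Teardown starts with killprocess on the nvmf target process (pid 3523773). The helper traced at @954-@978 probes the process before killing it; a simplified sketch (the real helper also special-cases targets running under sudo):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }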
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet
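clean_kernel_target unwinds the configfs tree in exact reverse order of creation: disable the namespace, remove the port-to-subsystem link, then rmdir the namespace, port, and subsystem before unloading nvmet_rdma/nvmet (configfs refuses to rmdir a directory that still has children or links). The bare `echo 0` traced above is, by that logic, a write to the namespace's enable attribute; redirection targets never appear in xtrace output, so the path below is a reconstruction:

    nqn=nqn.2024-02.io.spdk:cnode0
    cfs=/sys/kernel/config/nvmet
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # assumed target of the traced echo
    rm -f  "$cfs/ports/1/subsystems/$nqn"                 # unlink port from subsystem
    rmdir  "$cfs/subsystems/$nqn/namespaces/1"
    rmdir  "$cfs/ports/1"
    rmdir  "$cfs/subsystems/$nqn"
    modprobe -r nvmet_rdma nvmet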
00:35:31.194   14:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:35:34.478  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:34.478  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:36.381  0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
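setup.sh hands every I/OAT DMA channel (8086:2021) and the NVMe SSD at 0000:d8:00.0 (8086:0a54) from its kernel driver to vfio-pci so SPDK can map them from user space. The generic sysfs mechanism behind an "X -> vfio-pci" line looks like this (a sketch of the standard kernel interface, not a copy of setup.sh's internals):

    bdf=0000:d8:00.0
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach nvme/ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"    # pin the next driver
    echo "$bdf"   > /sys/bus/pci/drivers_probe                     # rebind to vfio-pci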
00:35:36.381   14:01:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nIE /tmp/spdk.key-null.JRk /tmp/spdk.key-sha256.xYv /tmp/spdk.key-sha384.aSN /tmp/spdk.key-sha512.N7k /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log
00:35:36.381   14:01:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:35:39.667  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:35:39.667  0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:35:39.667  
00:35:39.667  real	1m2.976s
00:35:39.667  user	0m55.622s
00:35:39.667  sys	0m15.734s
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.667  ************************************
00:35:39.667  END TEST nvmf_auth_host
00:35:39.667  ************************************
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]]
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:39.667   14:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.667  ************************************
00:35:39.667  START TEST nvmf_bdevperf
00:35:39.667  ************************************
00:35:39.667   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma
00:35:39.667  * Looking for test storage...
00:35:39.667  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:39.667     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:35:39.667     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:39.667    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
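`lt 1.15 2` is a field-wise comparison: both version strings are split on `.`, `-`, and `:` into arrays, missing fields default to 0, and fields are compared numerically left to right; 1 < 2 in the first field, so lcov 1.15 is "less than" 2 and the legacy --rc option spelling is selected below. A compact re-implementation of the same idea, assuming purely numeric fields (the real scripts/common.sh also normalizes components through its decimal helper):

    lt() {   # returns 0 when $1 < $2, comparing dot/dash/colon separated fields
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov option spelling"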
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:39.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:39.668  		--rc genhtml_branch_coverage=1
00:35:39.668  		--rc genhtml_function_coverage=1
00:35:39.668  		--rc genhtml_legend=1
00:35:39.668  		--rc geninfo_all_blocks=1
00:35:39.668  		--rc geninfo_unexecuted_blocks=1
00:35:39.668  		
00:35:39.668  		'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:39.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:39.668  		--rc genhtml_branch_coverage=1
00:35:39.668  		--rc genhtml_function_coverage=1
00:35:39.668  		--rc genhtml_legend=1
00:35:39.668  		--rc geninfo_all_blocks=1
00:35:39.668  		--rc geninfo_unexecuted_blocks=1
00:35:39.668  		
00:35:39.668  		'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:39.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:39.668  		--rc genhtml_branch_coverage=1
00:35:39.668  		--rc genhtml_function_coverage=1
00:35:39.668  		--rc genhtml_legend=1
00:35:39.668  		--rc geninfo_all_blocks=1
00:35:39.668  		--rc geninfo_unexecuted_blocks=1
00:35:39.668  		
00:35:39.668  		'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:39.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:39.668  		--rc genhtml_branch_coverage=1
00:35:39.668  		--rc genhtml_function_coverage=1
00:35:39.668  		--rc genhtml_legend=1
00:35:39.668  		--rc geninfo_all_blocks=1
00:35:39.668  		--rc geninfo_unexecuted_blocks=1
00:35:39.668  		
00:35:39.668  		'
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:39.668     14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:39.668      14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:39.668      14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:39.668      14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:39.668      14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:35:39.668      14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:39.668  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
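The line-33 complaint above (`[: : integer expression expected`) is the classic failure mode of `[ "$flag" -eq 1 ]` when the flag expands to an empty string: test requires an integer on both sides, prints the error, and returns non-zero, so the branch is simply skipped and the run continues. A defensive spelling that tolerates unset flags (SOME_TEST_FLAG is a placeholder name, not the actual variable behind this trace):

    if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then   # default empty/unset to 0
        echo "optional feature enabled"
    fi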
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:39.668    14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:35:39.668   14:01:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:47.782   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:47.782   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:35:47.782   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:47.782   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:47.782   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:35:47.783   14:01:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:35:47.783  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:35:47.783  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:35:47.783  Found net devices under 0000:d9:00.0: mlx_0_0
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:35:47.783  Found net devices under 0000:d9:00.1: mlx_0_1
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:35:47.783     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:35:47.783     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:35:47.783    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:35:47.783   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:35:47.783  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:35:47.783      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:35:47.783      altname enp217s0f0np0
00:35:47.783      altname ens818f0np0
00:35:47.784      inet 192.168.100.8/24 scope global mlx_0_0
00:35:47.784         valid_lft forever preferred_lft forever
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:35:47.784  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:35:47.784      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:35:47.784      altname enp217s0f1np1
00:35:47.784      altname ens818f1np1
00:35:47.784      inet 192.168.100.9/24 scope global mlx_0_1
00:35:47.784         valid_lft forever preferred_lft forever
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
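allocate_nic_ips iterates the RDMA-capable netdevs returned by get_rdma_if_list (mlx_0_0, mlx_0_1) and reads each one's IPv4 address with the ip/awk/cut chain traced above; both ports already carry addresses in the 192.168.100.0/24 test range, so nothing new is assigned and the function returns 0. The extraction helper reduces to:

    get_ip_address() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node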
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:35:47.784      14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:35:47.784      14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1
00:35:47.784     14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:35:47.784  192.168.100.9'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:35:47.784  192.168.100.9'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:35:47.784  192.168.100.9'
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2
00:35:47.784    14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
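The harvested addresses come back as one newline-separated string, and the first and second target IPs are peeled off with head/tail exactly as traced:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)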
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3539263
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3539263
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3539263 ']'
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:47.784  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:47.784   14:01:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:47.784  [2024-12-14 14:01:46.358661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:35:47.784  [2024-12-14 14:01:46.358751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:47.784  [2024-12-14 14:01:46.493334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:47.784  [2024-12-14 14:01:46.596701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:47.784  [2024-12-14 14:01:46.596755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:47.784  [2024-12-14 14:01:46.596769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:47.784  [2024-12-14 14:01:46.596798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:47.784  [2024-12-14 14:01:46.596808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:47.784  [2024-12-14 14:01:46.599159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:35:47.784  [2024-12-14 14:01:46.599220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:47.784  [2024-12-14 14:01:46.599227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:47.784  [2024-12-14 14:01:47.235514] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7ff356fa4940) succeed.
00:35:47.784  [2024-12-14 14:01:47.244874] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7ff356f60940) succeed.
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.784   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.043  Malloc0
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.043  [2024-12-14 14:01:47.555378] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.043   14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:48.043  {
00:35:48.043    "params": {
00:35:48.043      "name": "Nvme$subsystem",
00:35:48.043      "trtype": "$TEST_TRANSPORT",
00:35:48.043      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:48.043      "adrfam": "ipv4",
00:35:48.043      "trsvcid": "$NVMF_PORT",
00:35:48.043      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:48.043      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:48.043      "hdgst": ${hdgst:-false},
00:35:48.043      "ddgst": ${ddgst:-false}
00:35:48.043    },
00:35:48.043    "method": "bdev_nvme_attach_controller"
00:35:48.043  }
00:35:48.043  EOF
00:35:48.043  )")
00:35:48.043     14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:35:48.043    14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:35:48.043     14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:35:48.043     14:01:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:48.043    "params": {
00:35:48.043      "name": "Nvme1",
00:35:48.043      "trtype": "rdma",
00:35:48.043      "traddr": "192.168.100.8",
00:35:48.043      "adrfam": "ipv4",
00:35:48.043      "trsvcid": "4420",
00:35:48.043      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:48.043      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:48.043      "hdgst": false,
00:35:48.043      "ddgst": false
00:35:48.043    },
00:35:48.043    "method": "bdev_nvme_attach_controller"
00:35:48.043  }'
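bdevperf takes no attach parameters on the command line here; gen_nvmf_target_json renders the controller description from the heredoc template above (the shell fills in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and friends), jq compacts it, and the result is streamed to the app over an anonymous pipe, which is why the trace shows --json /dev/fd/62. Process substitution is the usual way to get that shape without a temp file (a sketch of the calling pattern, reusing the function traced above):

    # <(...) expands to a /dev/fd/NN path that bdevperf reads as its JSON config
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1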
00:35:48.043  [2024-12-14 14:01:47.644105] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:35:48.043  [2024-12-14 14:01:47.644192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539515 ]
00:35:48.043  [2024-12-14 14:01:47.779423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:48.302  [2024-12-14 14:01:47.888440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:35:48.560  Running I/O for 1 seconds...
00:35:49.981      15366.00 IOPS,    60.02 MiB/s
00:35:49.981                                                                                                  Latency(us)
00:35:49.981  
[2024-12-14T13:01:49.719Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:49.981  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:49.981  	 Verification LBA range: start 0x0 length 0x4000
00:35:49.981  	 Nvme1n1             :       1.01   15418.20      60.23       0.00     0.00    8255.15    3119.51   18350.08
00:35:49.981  
[2024-12-14T13:01:49.719Z]  ===================================================================================================================
00:35:49.981  
[2024-12-14T13:01:49.719Z]  Total                       :              15418.20      60.23       0.00     0.00    8255.15    3119.51   18350.08
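The MiB/s column follows directly from the IOPS at the fixed 4096-byte I/O size: 15418.20 IOPS x 4096 B/IO = 63,152,947 B/s, and 63,152,947 / 1,048,576 = 60.23 MiB/s, matching the table. A quick check:

    echo '15418.20 * 4096 / 1048576' | bc -l   # -> 60.227...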
00:35:50.548   14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3539835
00:35:50.548   14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:35:50.548   14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:50.548  {
00:35:50.548    "params": {
00:35:50.548      "name": "Nvme$subsystem",
00:35:50.548      "trtype": "$TEST_TRANSPORT",
00:35:50.548      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:50.548      "adrfam": "ipv4",
00:35:50.548      "trsvcid": "$NVMF_PORT",
00:35:50.548      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:50.548      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:50.548      "hdgst": ${hdgst:-false},
00:35:50.548      "ddgst": ${ddgst:-false}
00:35:50.548    },
00:35:50.548    "method": "bdev_nvme_attach_controller"
00:35:50.548  }
00:35:50.548  EOF
00:35:50.548  )")
00:35:50.548     14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:35:50.548    14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:35:50.548     14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:35:50.548     14:01:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:50.548    "params": {
00:35:50.548      "name": "Nvme1",
00:35:50.548      "trtype": "rdma",
00:35:50.548      "traddr": "192.168.100.8",
00:35:50.548      "adrfam": "ipv4",
00:35:50.548      "trsvcid": "4420",
00:35:50.548      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:50.548      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:50.548      "hdgst": false,
00:35:50.548      "ddgst": false
00:35:50.548    },
00:35:50.548    "method": "bdev_nvme_attach_controller"
00:35:50.548  }'
00:35:50.807  [2024-12-14 14:01:50.286762] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:35:50.807  [2024-12-14 14:01:50.286849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539835 ]
00:35:50.807  [2024-12-14 14:01:50.421648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:50.807  [2024-12-14 14:01:50.525828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:35:51.373  Running I/O for 15 seconds...
00:35:53.240      15428.00 IOPS,    60.27 MiB/s
[2024-12-14T13:01:53.236Z]     15552.00 IOPS,    60.75 MiB/s
[2024-12-14T13:01:53.236Z]  14:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3539263
00:35:53.498   14:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
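This step is the failover half of the test: the target (pid 3539263) is SIGKILLed while the 15-second bdevperf verify run is still in flight (bdevperf was launched with -f, presumably so it keeps running across controller loss). The flood of ABORTED - SQ DELETION completions that follows is the host-side driver failing outstanding READ commands as the RDMA queue pairs are torn down, and the per-second IOPS sample dropping from ~15.5K to ~11.8K reflects the stall. Once a target is listening again, recovery can be confirmed with the same check the auth test used earlier in this log:

    rpc_cmd bdev_nvme_get_controllers | jq length   # non-zero once the host has reconnected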
00:35:54.694      11777.33 IOPS,    46.01 MiB/s
[2024-12-14T13:01:54.432Z] [2024-12-14 14:01:54.269295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x185100
00:35:54.694  [2024-12-14 14:01:54.269350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.694  [2024-12-14 14:01:54.269386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x185100
00:35:54.694  [2024-12-14 14:01:54.269399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.694  [... 122 further READ commands elided: lba:20528 through lba:21496 in len:8 steps (cid varies), SGL KEYED DATA BLOCK ADDRESS advancing by 0x2000 from 0x20000430b000 to 0x2000043fd000, key:0x185100; each printed by nvme_io_qpair_print_command and completed identically by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 14:01:54.269413 through 14:01:54.272489 ...]
00:35:54.697  [2024-12-14 14:01:54.272503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:54.697  [2024-12-14 14:01:54.272514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.272535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:54.697  [2024-12-14 14:01:54.272546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.281249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:54.697  [2024-12-14 14:01:54.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.283546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:54.697  [2024-12-14 14:01:54.283574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:54.697  [2024-12-14 14:01:54.283591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21528 len:8 PRP1 0x0 PRP2 0x0
00:35:54.697  [2024-12-14 14:01:54.283609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.283878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:54.697  [2024-12-14 14:01:54.283898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.283916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:54.697  [2024-12-14 14:01:54.283938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.283954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:54.697  [2024-12-14 14:01:54.283970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:54.697  [2024-12-14 14:01:54.283986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:54.697  [2024-12-14 14:01:54.284001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
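Once the target dies, the host driver tears down its submission queues and manually fails everything still queued: each pending READ/WRITE and the four outstanding admin ASYNC EVENT REQUESTs above are completed with ABORTED - SQ DELETION (00/08). When triaging a log like this, that abort storm is expected noise after a deliberate kill -9; a quick filter separates it from genuine failures (the log filename below is a placeholder):

    # Triage sketch: count the expected abort noise, then list remaining errors.
    grep -c 'ABORTED - SQ DELETION' build.log
    grep '\*ERROR\*' build.log | grep -v 'SQ DELETION' | sort | uniq -c | sort -rn | head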
00:35:54.697  [2024-12-14 14:01:54.315038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:35:54.698  [2024-12-14 14:01:54.315124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:54.698  [2024-12-14 14:01:54.315171] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:35:54.698  [2024-12-14 14:01:54.318185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:54.698  [2024-12-14 14:01:54.322349] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:54.698  [2024-12-14 14:01:54.322382] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:54.698  [2024-12-14 14:01:54.322402] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
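From here the host loops: every poll of the completion queue returns transport error -6 (the device is gone), the controller is marked failed, and each reconnect attempt is answered by the RDMA connection manager with RDMA_CM_EVENT_REJECTED since nothing is listening any more, so the connect fails with -74 and bdev_nvme retries roughly once per second. The same triple repeats below until the rebuilt target starts listening. One way to watch for the listener's return from a shell, assuming nvme-cli is available (address and port are taken from the listener notice later in this log):

    # Poll until the NVMe/RDMA discovery service answers again (nvme-cli).
    until nvme discover -t rdma -a 192.168.100.8 -s 4420 >/dev/null 2>&1; do
        sleep 1
    done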
00:35:55.830       8833.00 IOPS,    34.50 MiB/s
[2024-12-14T13:01:55.568Z] [2024-12-14 14:01:55.326641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:35:55.830  [2024-12-14 14:01:55.326674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:55.830  [2024-12-14 14:01:55.326874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:55.830  [2024-12-14 14:01:55.326893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:55.830  [2024-12-14 14:01:55.326906] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:35:55.830  [2024-12-14 14:01:55.326922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:55.830  [2024-12-14 14:01:55.332677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:55.830  [2024-12-14 14:01:55.335870] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:55.830  [2024-12-14 14:01:55.335896] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:55.830  [2024-12-14 14:01:55.335907] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:35:56.654       7066.40 IOPS,    27.60 MiB/s
[2024-12-14T13:01:56.392Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3539263 Killed                  "${NVMF_APP[@]}" "$@"
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3540895
00:35:56.654   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3540895
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3540895 ']'
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:56.655  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:56.655   14:01:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
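tgt_init now re-runs nvmfappstart, which launches a fresh nvmf_tgt (core mask 0xE, new pid 3540895) and blocks in a waitforlisten-style loop until the app answers RPCs on /var/tmp/spdk.sock; the DPDK EAL initialization below is the new process coming up. A minimal sketch of such a wait loop (the retry bound mirrors max_retries=100 in the trace; the rpc.py path is an assumption):

    # Poll the RPC socket until the new nvmf_tgt responds, up to 100 tries.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done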
00:35:56.655  [2024-12-14 14:01:56.308714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:35:56.655  [2024-12-14 14:01:56.308810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:56.655  [2024-12-14 14:01:56.340216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:35:56.655  [2024-12-14 14:01:56.340258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:56.655  [2024-12-14 14:01:56.340462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:56.655  [2024-12-14 14:01:56.340480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:56.655  [2024-12-14 14:01:56.340494] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:35:56.655  [2024-12-14 14:01:56.340511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:56.655  [2024-12-14 14:01:56.346817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:56.655  [2024-12-14 14:01:56.350034] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:56.655  [2024-12-14 14:01:56.350061] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:56.655  [2024-12-14 14:01:56.350075] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:35:56.913  [2024-12-14 14:01:56.451213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:56.913  [2024-12-14 14:01:56.555147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:56.913  [2024-12-14 14:01:56.555196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:56.913  [2024-12-14 14:01:56.555210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:56.913  [2024-12-14 14:01:56.555223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:56.913  [2024-12-14 14:01:56.555232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:56.913  [2024-12-14 14:01:56.557420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:35:56.913  [2024-12-14 14:01:56.557481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:56.913  [2024-12-14 14:01:56.557501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
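The three reactor lines confirm the core mask: -m 0xE is binary 1110, so SPDK pins one reactor each to cores 1, 2 and 3 and leaves core 0 free. Decoding such a mask in plain bash:

    # Decode an SPDK core mask: 0xE -> "1 2 3".
    mask=0xE
    for ((i = 0; i < 64; i++)); do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo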
00:35:57.479       5888.67 IOPS,    23.00 MiB/s
[2024-12-14T13:01:57.217Z]  14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.479   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.479  [2024-12-14 14:01:57.182909] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028540/0x7f75ed78b940) succeed.
00:35:57.479  [2024-12-14 14:01:57.192272] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000286c0/0x7f75ed747940) succeed.
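With the app up, the script recreates the RDMA transport (1024 shared buffers, 8192-byte I/O unit), and both mlx5 ports register successfully as IB devices. The equivalent standalone call, with flags taken verbatim from the trace above (the rpc.py path is an assumption):

    # Recreate the RDMA transport exactly as host/bdevperf.sh@17 does.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192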
00:35:57.737  [2024-12-14 14:01:57.354275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:35:57.738  [2024-12-14 14:01:57.354327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:57.738  [2024-12-14 14:01:57.354532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:57.738  [2024-12-14 14:01:57.354547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:57.738  [2024-12-14 14:01:57.354561] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:35:57.738  [2024-12-14 14:01:57.354581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:57.738  [2024-12-14 14:01:57.363697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:57.738  [2024-12-14 14:01:57.366943] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:57.738  [2024-12-14 14:01:57.366971] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:57.738  [2024-12-14 14:01:57.366982] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.738  Malloc0
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.738   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.996  [2024-12-14 14:01:57.493118] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.996   14:01:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3539835
00:35:58.820       5047.43 IOPS,    19.72 MiB/s
00:35:58.820  [2024-12-14 14:01:58.371383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:35:58.820  [2024-12-14 14:01:58.371421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:58.820  [2024-12-14 14:01:58.371618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:58.820  [2024-12-14 14:01:58.371633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:58.820  [2024-12-14 14:01:58.371647] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:35:58.820  [2024-12-14 14:01:58.371664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:58.820  [2024-12-14 14:01:58.380031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:58.820  [2024-12-14 14:01:58.424577] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:36:00.321       5438.00 IOPS,    21.24 MiB/s
[2024-12-14T13:02:00.993Z]      6575.78 IOPS,    25.69 MiB/s
[2024-12-14T13:02:02.368Z]      7478.90 IOPS,    29.21 MiB/s
[2024-12-14T13:02:03.303Z]      8225.82 IOPS,    32.13 MiB/s
[2024-12-14T13:02:04.238Z]      8844.33 IOPS,    34.55 MiB/s
[2024-12-14T13:02:05.173Z]      9368.92 IOPS,    36.60 MiB/s
[2024-12-14T13:02:06.106Z]      9819.71 IOPS,    38.36 MiB/s
[2024-12-14T13:02:06.106Z]     10211.47 IOPS,    39.89 MiB/s
00:36:06.368                                                                                                  Latency(us)
00:36:06.368  
00:36:06.368  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:06.368  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:06.368  	 Verification LBA range: start 0x0 length 0x4000
00:36:06.368  	 Nvme1n1             :      15.01   10210.71      39.89   12550.80     0.00    5602.88     737.28 1114007.14
00:36:06.368  
00:36:06.368  ===================================================================================================================
00:36:06.368  
00:36:06.368  Total                       :              10210.71      39.89   12550.80     0.00    5602.88     737.28 1114007.14
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:36:07.302  rmmod nvme_rdma
00:36:07.302  rmmod nvme_fabrics
00:36:07.302   14:02:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3540895 ']'
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3540895
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3540895 ']'
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3540895
00:36:07.302    14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:36:07.302   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:07.302    14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3540895
00:36:07.561   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:07.561   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:07.561   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3540895'
00:36:07.561  killing process with pid 3540895
00:36:07.561   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3540895
00:36:07.561   14:02:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3540895
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:36:09.465  
00:36:09.465  real	0m29.735s
00:36:09.465  user	1m16.186s
00:36:09.465  sys	0m7.281s
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:09.465  ************************************
00:36:09.465  END TEST nvmf_bdevperf
00:36:09.465  ************************************
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:09.465  ************************************
00:36:09.465  START TEST nvmf_target_disconnect
00:36:09.465  ************************************
00:36:09.465   14:02:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:36:09.465  * Looking for test storage...
00:36:09.465  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:36:09.465    14:02:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:09.465     14:02:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:36:09.465     14:02:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:09.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:09.465  		--rc genhtml_branch_coverage=1
00:36:09.465  		--rc genhtml_function_coverage=1
00:36:09.465  		--rc genhtml_legend=1
00:36:09.465  		--rc geninfo_all_blocks=1
00:36:09.465  		--rc geninfo_unexecuted_blocks=1
00:36:09.465  		
00:36:09.465  		'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:09.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:09.465  		--rc genhtml_branch_coverage=1
00:36:09.465  		--rc genhtml_function_coverage=1
00:36:09.465  		--rc genhtml_legend=1
00:36:09.465  		--rc geninfo_all_blocks=1
00:36:09.465  		--rc geninfo_unexecuted_blocks=1
00:36:09.465  		
00:36:09.465  		'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:09.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:09.465  		--rc genhtml_branch_coverage=1
00:36:09.465  		--rc genhtml_function_coverage=1
00:36:09.465  		--rc genhtml_legend=1
00:36:09.465  		--rc geninfo_all_blocks=1
00:36:09.465  		--rc geninfo_unexecuted_blocks=1
00:36:09.465  		
00:36:09.465  		'
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:09.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:09.465  		--rc genhtml_branch_coverage=1
00:36:09.465  		--rc genhtml_function_coverage=1
00:36:09.465  		--rc genhtml_legend=1
00:36:09.465  		--rc geninfo_all_blocks=1
00:36:09.465  		--rc geninfo_unexecuted_blocks=1
00:36:09.465  		
00:36:09.465  		'
00:36:09.465   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:09.465     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:36:09.465    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:36:09.466     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:36:09.466     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:09.466     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:09.466     14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:09.466      14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:09.466      14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:09.466      14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:09.466      14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:36:09.466      14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:09.466  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:09.466    14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:36:09.466   14:02:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:16.028   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:36:16.029  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:36:16.029  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:36:16.029  Found net devices under 0000:d9:00.0: mlx_0_0
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:36:16.029  Found net devices under 0000:d9:00.1: mlx_0_1
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:36:16.029     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:36:16.029     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:36:16.029  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:36:16.029      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:36:16.029      altname enp217s0f0np0
00:36:16.029      altname ens818f0np0
00:36:16.029      inet 192.168.100.8/24 scope global mlx_0_0
00:36:16.029         valid_lft forever preferred_lft forever
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:36:16.029    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:36:16.029  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:36:16.029      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:36:16.029      altname enp217s0f1np1
00:36:16.029      altname ens818f1np1
00:36:16.029      inet 192.168.100.9/24 scope global mlx_0_1
00:36:16.029         valid_lft forever preferred_lft forever
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:16.029   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:36:16.030   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:36:16.030    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:36:16.030     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list
00:36:16.030     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:36:16.030     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:36:16.030      14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:36:16.030      14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:36:16.289     14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:36:16.289  192.168.100.9'
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:36:16.289  192.168.100.9'
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:36:16.289  192.168.100.9'
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:16.289  ************************************
00:36:16.289  START TEST nvmf_target_disconnect_tc1
00:36:16.289  ************************************
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:16.289    14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]]
00:36:16.289   14:02:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:36:16.546  [2024-12-14 14:02:16.110873] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:16.546  [2024-12-14 14:02:16.110952] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:16.546  [2024-12-14 14:02:16.110975] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0
00:36:17.478  [2024-12-14 14:02:17.115235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0
00:36:17.479  [2024-12-14 14:02:17.115282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state.
00:36:17.479  [2024-12-14 14:02:17.115314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state
00:36:17.479  [2024-12-14 14:02:17.115376] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:36:17.479  [2024-12-14 14:02:17.115392] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:36:17.479  spdk_nvme_probe() failed for transport address '192.168.100.8'
00:36:17.479  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:36:17.479  Initializing NVMe Controllers
00:36:17.479   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:36:17.479   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:17.479   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:17.479   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:17.479  
00:36:17.479  real	0m1.316s
00:36:17.479  user	0m0.930s
00:36:17.479  sys	0m0.372s
00:36:17.479   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:36:17.737  ************************************
00:36:17.737  END TEST nvmf_target_disconnect_tc1
00:36:17.737  ************************************
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:17.737  ************************************
00:36:17.737  START TEST nvmf_target_disconnect_tc2
00:36:17.737  ************************************
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3546492
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3546492
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3546492 ']'
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:17.737  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:17.737   14:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:17.737  [2024-12-14 14:02:17.397311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:36:17.737  [2024-12-14 14:02:17.397428] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:17.995  [2024-12-14 14:02:17.536100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:17.995  [2024-12-14 14:02:17.637105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:17.995  [2024-12-14 14:02:17.637155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:17.995  [2024-12-14 14:02:17.637166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:17.995  [2024-12-14 14:02:17.637178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:17.995  [2024-12-14 14:02:17.637188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:17.995  [2024-12-14 14:02:17.639517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:17.995  [2024-12-14 14:02:17.639608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:17.995  [2024-12-14 14:02:17.639678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:17.995  [2024-12-14 14:02:17.639704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
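The -m 0xF0 mask is why the four reactors land on cores 4 through 7: bit n of the mask selects core n. Quick check in the shell:

    # 0xF0 sets bits 4..7, i.e. cores 4, 5, 6 and 7 (four cores, as noted above)
    printf '0x%X\n' $(( (1<<4) | (1<<5) | (1<<6) | (1<<7) ))    # -> 0xF0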
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.562   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.820  Malloc0
00:36:18.820   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.820   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:36:18.820   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.820   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.820  [2024-12-14 14:02:18.379310] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fc6d4fbd940) succeed.
00:36:18.820  [2024-12-14 14:02:18.388987] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fc6d4f79940) succeed.
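The rdma transport picked up both mlx5 ports here. If a run ever shows fewer devices than expected, the stock rdma-core tooling lists what libibverbs (and therefore the transport) can see:

    # enumerate visible RDMA devices (rdma-core utility; output shape may vary)
    ibv_devinfo -l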
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:19.078  [2024-12-14 14:02:18.671997] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
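The rpc_cmd calls above (@19-@26) map one-to-one onto SPDK's stock rpc.py client (scripts/rpc.py in the repo). The consolidated provisioning sequence, with the same parameters as logged:

    # 64 MB malloc bdev with 512-byte blocks, RDMA transport,
    # one subsystem with the bdev as a namespace, data + discovery listeners
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420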
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3546780
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:36:19.078   14:02:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
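A flag-by-flag reading of the reconnect invocation above (best-effort gloss of the SPDK example's perf-style options, not authoritative documentation):

    #   -q 32     queue depth per qpair (hence 32 failed completions per disconnect below)
    #   -o 4096   I/O size in bytes
    #   -w randrw -M 50   random mixed workload, 50% reads
    #   -t 10     run time in seconds
    #   -c 0xF    initiator core mask, cores 0-3 (disjoint from the target's 0xF0)
    #   -r '...'  target transport ID: RDMA, IPv4, 192.168.100.8:4420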
00:36:20.981   14:02:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3546492
00:36:20.981   14:02:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:36:22.355  Read completed with error (sct=0, sc=8)
00:36:22.355  starting I/O failed
00:36:22.355  [... 31 further outstanding reads and writes completed with error (sct=0, sc=8), each followed by "starting I/O failed": 32 failed completions in total, one per slot of the example's -q 32 queue ...]
00:36:22.355  [2024-12-14 14:02:21.963904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:23.292  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3546492 Killed                  "${NVMF_APP[@]}" "$@"
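The "Killed" line above is ordinary bash job control: the script shell reaps the background nvmf_tgt (PID 3546492) that @45 SIGKILLed and reports it against target_disconnect.sh line 36, where the job was launched. The same shape of message can be reproduced in any bash script:

    # bash reports a signal-terminated background job when it is reaped
    sleep 100 & pid=$!
    kill -9 "$pid"
    wait "$pid"    # prints something like "line N: <pid> Killed   sleep 100"; $? is 137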
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3547326
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3547326
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3547326 ']'
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:23.292  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:23.292   14:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:23.292  [2024-12-14 14:02:22.795450] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:36:23.292  [2024-12-14 14:02:22.795565] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:23.292  [2024-12-14 14:02:22.955848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:23.292  Read completed with error (sct=0, sc=8)
00:36:23.292  starting I/O failed
00:36:23.293  [... 31 further reads and writes completed with error (sct=0, sc=8), each followed by "starting I/O failed": again 32 failed completions in total ...]
00:36:23.293  [2024-12-14 14:02:22.969605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:23.552  [2024-12-14 14:02:23.061244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:23.552  [2024-12-14 14:02:23.061288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:23.552  [2024-12-14 14:02:23.061300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:23.552  [2024-12-14 14:02:23.061313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:23.552  [2024-12-14 14:02:23.061323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:23.552  [2024-12-14 14:02:23.064078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:23.552  [2024-12-14 14:02:23.064170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:23.552  [2024-12-14 14:02:23.064267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:23.552  [2024-12-14 14:02:23.064292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.120  Malloc0
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.120   14:02:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.120  [2024-12-14 14:02:23.765141] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7ffa5bb71940) succeed.
00:36:24.120  [2024-12-14 14:02:23.775044] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7ffa5bb2d940) succeed.
00:36:24.447  Read completed with error (sct=0, sc=8)
00:36:24.447  starting I/O failed
00:36:24.448  [... 31 further reads and writes completed with error (sct=0, sc=8), each followed by "starting I/O failed": 32 failed completions in total ...]
00:36:24.448  [2024-12-14 14:02:23.975322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.448  [2024-12-14 14:02:24.057754] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.448   14:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3546780
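From here the script simply waits (@50) on the reconnect example; everything below is the host side failing another burst of in-flight I/O (qpair id 4) and then retrying CONNECT against the restarted target. When triaging a captured log like this one, the failures tally quickly with standard tools (the filename is hypothetical):

    # count failed qpair recovery attempts, then group CONNECT errors by status code
    grep -c 'unable to recover it' nvmf-phy-autotest.log
    grep -o 'sct [0-9]*, sc [0-9]*' nvmf-phy-autotest.log | sort | uniq -c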
00:36:25.387  Read completed with error (sct=0, sc=8)
00:36:25.387  starting I/O failed
00:36:25.387  [... 31 further reads and writes completed with error (sct=0, sc=8), each followed by "starting I/O failed": 32 failed completions in total ...]
00:36:25.387  [2024-12-14 14:02:24.980844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:25.387  [2024-12-14 14:02:24.994993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.387  [2024-12-14 14:02:24.995086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.387  [2024-12-14 14:02:24.995120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.387  [2024-12-14 14:02:24.995137] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.388  [2024-12-14 14:02:24.995152] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:25.388  [2024-12-14 14:02:25.004735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:25.388  qpair failed and we were unable to recover it.
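Each of these blocks has the same anatomy: the restarted target has no controller with ID 0x1, so it rejects the I/O-queue CONNECT ("Unknown controller ID"); the host sees the fabrics CONNECT complete with sct 1, sc 130 (0x82, which reads as Connect Invalid Parameters in the fabrics command set), fails the qpair, and the CQ poll surfaces -6, a negated ENXIO:

    # the "-6" in the CQ transport error is a negated Linux errno
    # (path assumes kernel headers are installed)
    grep -w ENXIO /usr/include/asm-generic/errno-base.h
    # -> #define ENXIO  6  /* No such device or address */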
00:36:25.388  [... the five-error CONNECT sequence above ("Unknown controller ID 0x1", "Connect command failed, rc -5", "sct 1, sc 130", "Failed to poll NVMe-oF Fabric CONNECT command", "Failed to connect rqpair=0x2000003d1040"), each capped by a CQ transport error -6 on qpair id 4 and "qpair failed and we were unable to recover it.", repeats for 29 further attempts between 14:02:25.014 and 14:02:25.587 ...]
00:36:25.906  [2024-12-14 14:02:25.596328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.907  [2024-12-14 14:02:25.596392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.907  [2024-12-14 14:02:25.596419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.907  [2024-12-14 14:02:25.596433] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.907  [2024-12-14 14:02:25.596446] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:25.907  [2024-12-14 14:02:25.606567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:25.907  qpair failed and we were unable to recover it.
00:36:25.907  [2024-12-14 14:02:25.616339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.907  [2024-12-14 14:02:25.616404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.907  [2024-12-14 14:02:25.616428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.907  [2024-12-14 14:02:25.616446] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.907  [2024-12-14 14:02:25.616458] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:25.907  [2024-12-14 14:02:25.627043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:25.907  qpair failed and we were unable to recover it.
00:36:25.907  [2024-12-14 14:02:25.636443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.907  [2024-12-14 14:02:25.636510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.907  [2024-12-14 14:02:25.636537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.907  [2024-12-14 14:02:25.636552] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.907  [2024-12-14 14:02:25.636566] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.166  [2024-12-14 14:02:25.646504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.166  qpair failed and we were unable to recover it.
00:36:26.166  [2024-12-14 14:02:25.656470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.166  [2024-12-14 14:02:25.656539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.166  [2024-12-14 14:02:25.656563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.166  [2024-12-14 14:02:25.656579] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.166  [2024-12-14 14:02:25.656591] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.166  [2024-12-14 14:02:25.666700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.166  qpair failed and we were unable to recover it.
00:36:26.166  [2024-12-14 14:02:25.676603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.166  [2024-12-14 14:02:25.676670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.166  [2024-12-14 14:02:25.676697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.166  [2024-12-14 14:02:25.676710] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.166  [2024-12-14 14:02:25.676724] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.166  [2024-12-14 14:02:25.686754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.166  qpair failed and we were unable to recover it.
00:36:26.166  [2024-12-14 14:02:25.696669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.166  [2024-12-14 14:02:25.696735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.166  [2024-12-14 14:02:25.696760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.166  [2024-12-14 14:02:25.696775] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.166  [2024-12-14 14:02:25.696787] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.166  [2024-12-14 14:02:25.706898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.166  qpair failed and we were unable to recover it.
00:36:26.166  [2024-12-14 14:02:25.716677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.166  [2024-12-14 14:02:25.716743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.166  [2024-12-14 14:02:25.716771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.166  [2024-12-14 14:02:25.716784] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.166  [2024-12-14 14:02:25.716798] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.166  [2024-12-14 14:02:25.726779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.166  qpair failed and we were unable to recover it.
00:36:26.166  [2024-12-14 14:02:25.736733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.166  [2024-12-14 14:02:25.736802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.166  [2024-12-14 14:02:25.736827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.166  [2024-12-14 14:02:25.736842] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.736854] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.746936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.756732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.756794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.756824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.756837] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.756851] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.766905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.776777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.776842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.776869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.776885] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.776897] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.787015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.796973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.797037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.797064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.797078] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.797094] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.806938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.816891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.816955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.816980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.816998] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.817009] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.827548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.837024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.837086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.837113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.837126] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.837139] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.847495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.857096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.857161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.857185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.857200] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.857215] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.867190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.877193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.877251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.877277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.877291] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.877305] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.167  [2024-12-14 14:02:25.889954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.167  qpair failed and we were unable to recover it.
00:36:26.167  [2024-12-14 14:02:25.897214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.167  [2024-12-14 14:02:25.897278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.167  [2024-12-14 14:02:25.897302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.167  [2024-12-14 14:02:25.897318] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.167  [2024-12-14 14:02:25.897329] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.426  [2024-12-14 14:02:25.907643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:25.917208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:25.917270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:25.917298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:25.917312] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:25.917325] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:25.927653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:25.937269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:25.937331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:25.937355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:25.937373] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:25.937385] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:25.947687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:25.957390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:25.957449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:25.957476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:25.957490] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:25.957504] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:25.967485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:25.977309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:25.977373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:25.977397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:25.977413] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:25.977425] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:25.987778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:25.997478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:25.997546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:25.997573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:25.997587] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:25.997600] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.007856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.017474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.017542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.017566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.017581] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.017593] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.027725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.039314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.039387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.039414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.039428] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.039442] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.047544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.057506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.057572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.057596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.057612] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.057625] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.067822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.077561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.077617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.077647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.077661] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.077676] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.087798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.097612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.097675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.097699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.097716] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.097727] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.108243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.117820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.117879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.117908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.117922] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.117945] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.128237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.137966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.427  [2024-12-14 14:02:26.138033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.427  [2024-12-14 14:02:26.138057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.427  [2024-12-14 14:02:26.138074] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.427  [2024-12-14 14:02:26.138085] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.427  [2024-12-14 14:02:26.148758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.427  qpair failed and we were unable to recover it.
00:36:26.427  [2024-12-14 14:02:26.157972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.428  [2024-12-14 14:02:26.158026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.428  [2024-12-14 14:02:26.158053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.428  [2024-12-14 14:02:26.158066] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.428  [2024-12-14 14:02:26.158080] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.687  [2024-12-14 14:02:26.168450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.687  qpair failed and we were unable to recover it.
00:36:26.687  [2024-12-14 14:02:26.178081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.687  [2024-12-14 14:02:26.178148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.687  [2024-12-14 14:02:26.178172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.687  [2024-12-14 14:02:26.178188] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.687  [2024-12-14 14:02:26.178199] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.687  [2024-12-14 14:02:26.190101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.687  qpair failed and we were unable to recover it.
00:36:26.687  [2024-12-14 14:02:26.198172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.687  [2024-12-14 14:02:26.198239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.687  [2024-12-14 14:02:26.198266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.687  [2024-12-14 14:02:26.198280] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.687  [2024-12-14 14:02:26.198296] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.687  [2024-12-14 14:02:26.208460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.687  qpair failed and we were unable to recover it.
00:36:26.687  [2024-12-14 14:02:26.218062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.687  [2024-12-14 14:02:26.218126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.687  [2024-12-14 14:02:26.218150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.687  [2024-12-14 14:02:26.218166] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.687  [2024-12-14 14:02:26.218178] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.687  [2024-12-14 14:02:26.228915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.687  qpair failed and we were unable to recover it.
00:36:26.687  [2024-12-14 14:02:26.238405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.687  [2024-12-14 14:02:26.238470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.687  [2024-12-14 14:02:26.238497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.687  [2024-12-14 14:02:26.238510] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.687  [2024-12-14 14:02:26.238524] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.248481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.258323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.258387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.258411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.258430] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.258442] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.269137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.278492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.278551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.278576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.278590] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.278601] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.288558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.298399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.298458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.298483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.298496] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.298508] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.308919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.318700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.318756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.318780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.318794] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.318806] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.328922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.338538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.338598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.338623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.338638] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.338649] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.349109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.358541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.358602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.358626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.358640] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.358651] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.368822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.378633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.378696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.378720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.378734] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.378746] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.389153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.398765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.398827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.398856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.398871] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.398883] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.688  [2024-12-14 14:02:26.409052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.688  qpair failed and we were unable to recover it.
00:36:26.688  [2024-12-14 14:02:26.418707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.688  [2024-12-14 14:02:26.418766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.688  [2024-12-14 14:02:26.418791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.688  [2024-12-14 14:02:26.418804] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.688  [2024-12-14 14:02:26.418816] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.428992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.438858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.438917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.438949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.438963] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.438974] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.449127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.458910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.458972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.458996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.459013] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.459025] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.469424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.478893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.478957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.478983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.478996] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.479008] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.489348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.498877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.498942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.498967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.498981] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.498993] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.509155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.519057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.519111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.519135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.519149] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.519161] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.529354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.539046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.539107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.539131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.539145] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.539161] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.549537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.559163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.559226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.948  [2024-12-14 14:02:26.559250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.948  [2024-12-14 14:02:26.559264] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.948  [2024-12-14 14:02:26.559276] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.948  [2024-12-14 14:02:26.569195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.948  qpair failed and we were unable to recover it.
00:36:26.948  [2024-12-14 14:02:26.579136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.948  [2024-12-14 14:02:26.579193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.579218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.579231] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.579243] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.949  [2024-12-14 14:02:26.589448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.949  qpair failed and we were unable to recover it.
00:36:26.949  [2024-12-14 14:02:26.599328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.949  [2024-12-14 14:02:26.599387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.599411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.599425] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.599436] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.949  [2024-12-14 14:02:26.609437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.949  qpair failed and we were unable to recover it.
00:36:26.949  [2024-12-14 14:02:26.619293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.949  [2024-12-14 14:02:26.619353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.619377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.619390] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.619401] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.949  [2024-12-14 14:02:26.629602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.949  qpair failed and we were unable to recover it.
00:36:26.949  [2024-12-14 14:02:26.639395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.949  [2024-12-14 14:02:26.639455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.639479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.639492] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.639503] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.949  [2024-12-14 14:02:26.649551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.949  qpair failed and we were unable to recover it.
00:36:26.949  [2024-12-14 14:02:26.659424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.949  [2024-12-14 14:02:26.659486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.659510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.659524] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.659536] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:26.949  [2024-12-14 14:02:26.669953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:26.949  qpair failed and we were unable to recover it.
00:36:26.949  [2024-12-14 14:02:26.679474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.949  [2024-12-14 14:02:26.679529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.949  [2024-12-14 14:02:26.679553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.949  [2024-12-14 14:02:26.679567] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.949  [2024-12-14 14:02:26.679578] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.689672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.699601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.699662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.699686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.699700] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.699711] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.710026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.719688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.719749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.719777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.719791] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.719802] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.729938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.739615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.739671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.739695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.739709] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.739720] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.749926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.759676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.759729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.759752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.759766] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.759778] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.770015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.779774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.779835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.779859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.779872] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.779884] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.792926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.799804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.799864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.799888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.799907] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.799918] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.810176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.819885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.819945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.819969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.819982] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.819994] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.830367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.840040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.840094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.840118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.840131] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.840143] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.850240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.860035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.209  [2024-12-14 14:02:26.860091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.209  [2024-12-14 14:02:26.860115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.209  [2024-12-14 14:02:26.860129] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.209  [2024-12-14 14:02:26.860140] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.209  [2024-12-14 14:02:26.870369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.209  qpair failed and we were unable to recover it.
00:36:27.209  [2024-12-14 14:02:26.880294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.210  [2024-12-14 14:02:26.880352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.210  [2024-12-14 14:02:26.880376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.210  [2024-12-14 14:02:26.880390] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.210  [2024-12-14 14:02:26.880401] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.210  [2024-12-14 14:02:26.890588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.210  qpair failed and we were unable to recover it.
00:36:27.210  [2024-12-14 14:02:26.900175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.210  [2024-12-14 14:02:26.900231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.210  [2024-12-14 14:02:26.900256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.210  [2024-12-14 14:02:26.900270] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.210  [2024-12-14 14:02:26.900281] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.210  [2024-12-14 14:02:26.910849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.210  qpair failed and we were unable to recover it.
00:36:27.210  [2024-12-14 14:02:26.920336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.210  [2024-12-14 14:02:26.920396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.210  [2024-12-14 14:02:26.920420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.210  [2024-12-14 14:02:26.920433] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.210  [2024-12-14 14:02:26.920444] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.210  [2024-12-14 14:02:26.930611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.210  qpair failed and we were unable to recover it.
00:36:27.210  [2024-12-14 14:02:26.942871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.210  [2024-12-14 14:02:26.942934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.210  [2024-12-14 14:02:26.942959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.210  [2024-12-14 14:02:26.942972] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.210  [2024-12-14 14:02:26.942984] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.469  [2024-12-14 14:02:26.950560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.469  qpair failed and we were unable to recover it.
00:36:27.469  [2024-12-14 14:02:26.960455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.469  [2024-12-14 14:02:26.960516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.469  [2024-12-14 14:02:26.960539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.469  [2024-12-14 14:02:26.960553] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.469  [2024-12-14 14:02:26.960564] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.469  [2024-12-14 14:02:26.970492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.469  qpair failed and we were unable to recover it.
00:36:27.469  [2024-12-14 14:02:26.980502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.469  [2024-12-14 14:02:26.980566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.469  [2024-12-14 14:02:26.980590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.469  [2024-12-14 14:02:26.980603] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.469  [2024-12-14 14:02:26.980614] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.469  [2024-12-14 14:02:26.990764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.469  qpair failed and we were unable to recover it.
00:36:27.469  [2024-12-14 14:02:27.000491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.469  [2024-12-14 14:02:27.000549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.469  [2024-12-14 14:02:27.000573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.469  [2024-12-14 14:02:27.000587] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.000599] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.010764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.020510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.020572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.020596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.020610] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.020621] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.031155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.040556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.040621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.040645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.040658] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.040670] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.051136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.060681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.060741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.060769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.060783] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.060795] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.071307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.080707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.080759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.080783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.080796] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.080807] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.093481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.100735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.100798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.100823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.100836] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.100847] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.111110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.120857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.120912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.120946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.120960] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.120972] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.131140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.140909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.140975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.140999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.141017] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.141029] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.151235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.160925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.160988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.161012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.161026] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.161037] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.171195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.181083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.181140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.181164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.181178] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.181190] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.470  [2024-12-14 14:02:27.191366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.470  qpair failed and we were unable to recover it.
00:36:27.470  [2024-12-14 14:02:27.200999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.470  [2024-12-14 14:02:27.201057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.470  [2024-12-14 14:02:27.201081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.470  [2024-12-14 14:02:27.201095] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.470  [2024-12-14 14:02:27.201106] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.211141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.221028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.221086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.221110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.221124] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.221143] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.231500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.243179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.243243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.243268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.243281] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.243293] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.251319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.261254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.261313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.261337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.261351] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.261363] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.271507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.281341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.281404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.281428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.281441] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.281453] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.291445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.301424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.301485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.301509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.301523] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.301535] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.311594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.321507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.730  [2024-12-14 14:02:27.321560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.730  [2024-12-14 14:02:27.321584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.730  [2024-12-14 14:02:27.321598] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.730  [2024-12-14 14:02:27.321609] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.730  [2024-12-14 14:02:27.331560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.730  qpair failed and we were unable to recover it.
00:36:27.730  [2024-12-14 14:02:27.341505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.341562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.341586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.341599] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.341611] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.351829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.361481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.361541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.361566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.361579] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.361590] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.372071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.381644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.381702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.381726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.381739] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.381751] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.393577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.401622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.401682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.401710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.401724] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.401735] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.411787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.421780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.421842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.421867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.421881] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.421892] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.432012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.441764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.441818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.441842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.441856] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.441868] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.731  [2024-12-14 14:02:27.451743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.731  qpair failed and we were unable to recover it.
00:36:27.731  [2024-12-14 14:02:27.461853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.731  [2024-12-14 14:02:27.461914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.731  [2024-12-14 14:02:27.461953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.731  [2024-12-14 14:02:27.461967] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.731  [2024-12-14 14:02:27.461979] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.472063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.481850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.481912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.481943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.481957] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.481980] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.492106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.502039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.502095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.502119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.502132] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.502143] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.512173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.521998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.522059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.522083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.522098] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.522109] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.532107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.542519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.542576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.542601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.542615] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.542627] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.552739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.562163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.562225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.562250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.562264] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.562283] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.572446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.582267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.582328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.582352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.582365] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.582377] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.592500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.602165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.602226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.602250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.602264] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.602275] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.612340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.622404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.622465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.622489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.622502] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.622513] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.632503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:27.991  [2024-12-14 14:02:27.642436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.991  [2024-12-14 14:02:27.642496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.991  [2024-12-14 14:02:27.642519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.991  [2024-12-14 14:02:27.642533] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.991  [2024-12-14 14:02:27.642544] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:27.991  [2024-12-14 14:02:27.652744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:27.991  qpair failed and we were unable to recover it.
00:36:29.293  [log excerpt condensed: the seven-line CONNECT-failure/recovery cycle above repeats verbatim, only the timestamps advancing from 14:02:27.662 to 14:02:28.826 — 58 complete repetitions, plus one truncated at the end of this excerpt, elided]
00:36:29.293  [2024-12-14 14:02:28.836814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.293  qpair failed and we were unable to recover it.
00:36:29.293  [2024-12-14 14:02:28.846023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.293  [2024-12-14 14:02:28.846085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.293  [2024-12-14 14:02:28.846109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.293  [2024-12-14 14:02:28.846123] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.293  [2024-12-14 14:02:28.846135] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.293  [2024-12-14 14:02:28.856336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.293  qpair failed and we were unable to recover it.
00:36:29.293  [2024-12-14 14:02:28.866106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.293  [2024-12-14 14:02:28.866165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.293  [2024-12-14 14:02:28.866188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.293  [2024-12-14 14:02:28.866201] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.293  [2024-12-14 14:02:28.866212] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.293  [2024-12-14 14:02:28.876356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.293  qpair failed and we were unable to recover it.
00:36:29.293  [2024-12-14 14:02:28.887768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.293  [2024-12-14 14:02:28.887831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.293  [2024-12-14 14:02:28.887856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.293  [2024-12-14 14:02:28.887870] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.293  [2024-12-14 14:02:28.887882] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.293  [2024-12-14 14:02:28.896550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.293  qpair failed and we were unable to recover it.
00:36:29.293  [2024-12-14 14:02:28.906232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.293  [2024-12-14 14:02:28.906294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.293  [2024-12-14 14:02:28.906318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.293  [2024-12-14 14:02:28.906332] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:28.906343] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:28.916587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:28.926535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:28.926591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:28.926614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:28.926628] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:28.926639] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:28.936541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:28.946313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:28.946373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:28.946397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:28.946410] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:28.946422] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:28.956720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:28.966424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:28.966484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:28.966508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:28.966521] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:28.966532] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:28.976712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:28.986526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:28.986584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:28.986608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:28.986621] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:28.986632] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:28.996742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:29.006650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:29.006708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:29.006731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:29.006746] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:29.006758] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.294  [2024-12-14 14:02:29.016814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.294  qpair failed and we were unable to recover it.
00:36:29.294  [2024-12-14 14:02:29.026706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.294  [2024-12-14 14:02:29.026764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.294  [2024-12-14 14:02:29.026793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.294  [2024-12-14 14:02:29.026808] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.294  [2024-12-14 14:02:29.026820] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:29.553  [2024-12-14 14:02:29.037569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.046831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.046909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.046958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.046979] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.046999] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0
00:36:29.553  [2024-12-14 14:02:29.057042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.066860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.066937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.066964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.066983] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.066999] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0
00:36:29.553  [2024-12-14 14:02:29.077212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.086840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.086903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.086938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.086954] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.086969] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:29.553  [2024-12-14 14:02:29.097308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.106813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.106879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.106904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.106926] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.106953] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:29.553  [2024-12-14 14:02:29.117255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.117495] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:36:29.553  A controller has encountered a failure and is being reset.
00:36:29.553  [2024-12-14 14:02:29.127133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.127204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.127241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.127262] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.127281] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2840
00:36:29.553  [2024-12-14 14:02:29.137293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.147079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:29.553  [2024-12-14 14:02:29.147142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:29.553  [2024-12-14 14:02:29.147168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:29.553  [2024-12-14 14:02:29.147184] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:29.553  [2024-12-14 14:02:29.147196] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2840
00:36:29.553  [2024-12-14 14:02:29.157273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:29.553  qpair failed and we were unable to recover it.
00:36:29.553  [2024-12-14 14:02:29.157553] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:36:29.553  [2024-12-14 14:02:29.202439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:36:29.553  Controller properly reset.
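The loop above is the tc2 disconnect scenario running to completion: the host retries the fabrics CONNECT roughly every 20 ms, the target rejects each attempt ("Unknown controller ID 0x1", surfaced to the host as sct 1, sc 130, i.e. 0x82, which in the NVMe-oF specification is the Connect "Invalid Parameters" status), and recovery only happens once the Keep Alive submission fails and the controller is reset. As a rough triage aid, the retries in a saved copy of a log like this can be counted with grep (the file name build.log is hypothetical):

    # How many CONNECT attempts were rejected, and how many qpairs gave up?
    grep -c 'Connect command failed, rc -5' build.log
    grep -c 'qpair failed and we were unable to recover it' build.log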
00:36:29.812  Initializing NVMe Controllers
00:36:29.812  Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:29.812  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:29.812  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:29.812  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:29.812  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:29.812  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:29.812  Initialization complete. Launching workers.
00:36:29.812  Starting thread on core 1
00:36:29.812  Starting thread on core 2
00:36:29.812  Starting thread on core 3
00:36:29.812  Starting thread on core 0
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:36:29.812  
00:36:29.812  real	0m12.123s
00:36:29.812  user	0m26.714s
00:36:29.812  sys	0m2.706s
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:29.812  ************************************
00:36:29.812  END TEST nvmf_target_disconnect_tc2
00:36:29.812  ************************************
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:29.812  ************************************
00:36:29.812  START TEST nvmf_target_disconnect_tc3
00:36:29.812  ************************************
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3548496
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:36:29.812   14:02:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
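For readers skimming the xtrace, here is the tc3 flow as a minimal sketch reconstructed from the lines above and below (paths, flags, and PIDs are copied from this run; disconnect_init is the helper in target_disconnect.sh that restarts the target):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Run the reconnect example against the primary listener; the failover
    # address is handed over via alt_traddr (flags exactly as logged above).
    "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' &
    reconnectpid=$!
    sleep 2
    kill -9 3547326   # kill the original target; in-flight I/O starts failing
    sleep 2
    # disconnect_init 192.168.100.9  -> new nvmf_tgt on the failover address
    wait "$reconnectpid"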
00:36:32.346   14:02:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3547326
00:36:32.346   14:02:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Write completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  Read completed with error (sct=0, sc=8)
00:36:33.282  starting I/O failed
00:36:33.282  [2024-12-14 14:02:32.795751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
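Each "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair above is the reconnect example reporting one aborted in-flight I/O; sct 0, sc 8 is the generic "Command Aborted due to SQ Deletion" status from the NVMe base specification, which is what queued commands see when their qpair is torn down. A one-line tally by direction, assuming the log was saved to a file (build.log is a hypothetical name):

    # Count failed completions per direction (field 2 is Read/Write here).
    awk '/completed with error \(sct=0, sc=8\)/ {n[$2]++} END {for (d in n) print d, n[d]}' build.log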
00:36:33.850  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3547326 Killed                  "${NVMF_APP[@]}" "$@"
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3549225
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3549225
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3549225 ']'
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:33.850  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:33.850   14:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:34.109  [2024-12-14 14:02:33.610880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:36:34.109  [2024-12-14 14:02:33.610989] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:34.109  [2024-12-14 14:02:33.772530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Read completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  Write completed with error (sct=0, sc=8)
00:36:34.109  starting I/O failed
00:36:34.109  [2024-12-14 14:02:33.801183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:34.109  [2024-12-14 14:02:33.803207] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:34.109  [2024-12-14 14:02:33.803238] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:34.109  [2024-12-14 14:02:33.803251] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:34.368  [2024-12-14 14:02:33.877634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:34.368  [2024-12-14 14:02:33.877675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:34.368  [2024-12-14 14:02:33.877688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:34.368  [2024-12-14 14:02:33.877717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:34.368  [2024-12-14 14:02:33.877728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:34.368  [2024-12-14 14:02:33.880461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:34.368  [2024-12-14 14:02:33.880553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:34.368  [2024-12-14 14:02:33.880619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:34.368  [2024-12-14 14:02:33.880645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:34.935  Malloc0
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:34.935   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:34.935  [2024-12-14 14:02:34.581834] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f48da30d940) succeed.
00:36:34.935  [2024-12-14 14:02:34.591726] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f48da1bd940) succeed.
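The two NOTICE lines above show the target claiming both ports of the mlx5 NIC. On a comparable host the available RDMA devices can be cross-checked with standard rdma-core/iproute2 tools (a sketch only; output differs per machine):

    ibv_devinfo -l    # lists mlx5_0, mlx5_1, ...
    rdma link show    # per-port RDMA link state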
00:36:35.194  [2024-12-14 14:02:34.807485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:35.194  qpair failed and we were unable to recover it.
00:36:35.194  [2024-12-14 14:02:34.809338] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:35.194  [2024-12-14 14:02:34.809370] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:35.194  [2024-12-14 14:02:34.809383] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:35.194  [2024-12-14 14:02:34.876355] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
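Condensing the rpc_cmd xtrace above, the failover target on 192.168.100.9 is configured with the following RPC sequence (a sketch, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket mentioned earlier in the startup wait):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420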
00:36:35.194   14:02:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3548496
00:36:36.129  [2024-12-14 14:02:35.813527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:36.129  qpair failed and we were unable to recover it.
00:36:36.129  [2024-12-14 14:02:35.815234] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:36.129  [2024-12-14 14:02:35.815264] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:36.129  [2024-12-14 14:02:35.815278] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:37.506  [2024-12-14 14:02:36.819441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.506  qpair failed and we were unable to recover it.
00:36:37.506  [2024-12-14 14:02:36.821299] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:37.506  [2024-12-14 14:02:36.821334] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:37.506  [2024-12-14 14:02:36.821346] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:38.443  [2024-12-14 14:02:37.825372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.443  qpair failed and we were unable to recover it.
00:36:38.443  [2024-12-14 14:02:37.827132] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:38.443  [2024-12-14 14:02:37.827162] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:38.443  [2024-12-14 14:02:37.827174] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:39.379  [2024-12-14 14:02:38.831227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:39.379  qpair failed and we were unable to recover it.
00:36:39.379  [2024-12-14 14:02:38.832958] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:39.379  [2024-12-14 14:02:38.832988] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:39.379  [2024-12-14 14:02:38.833000] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:40.315  [2024-12-14 14:02:39.837411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:40.315  qpair failed and we were unable to recover it.
00:36:40.315  [2024-12-14 14:02:39.839082] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:40.315  [2024-12-14 14:02:39.839111] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:40.315  [2024-12-14 14:02:39.839124] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40
00:36:41.251  [2024-12-14 14:02:40.843055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:36:41.251  qpair failed and we were unable to recover it.
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Write completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  Read completed with error (sct=0, sc=8)
00:36:42.188  starting I/O failed
00:36:42.188  [2024-12-14 14:02:41.849175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Write completed with error (sct=0, sc=8)
00:36:43.127  starting I/O failed
00:36:43.127  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Read completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  Write completed with error (sct=0, sc=8)
00:36:43.128  starting I/O failed
00:36:43.128  [2024-12-14 14:02:42.854951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:36:43.128  [2024-12-14 14:02:42.855010] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:36:43.128  A controller has encountered a failure and is being reset.
00:36:43.128  Resorting to new failover address 192.168.100.9
00:36:43.128  [2024-12-14 14:02:42.855142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:43.128  [2024-12-14 14:02:42.855275] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:36:43.386  [2024-12-14 14:02:42.899178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:36:43.386  Controller properly reset.
00:36:43.386  Initializing NVMe Controllers
00:36:43.386  Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:43.386  Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:43.386  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:43.386  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:43.386  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:43.386  Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:43.386  Initialization complete. Launching workers.
00:36:43.386  Starting thread on core 1
00:36:43.386  Starting thread on core 2
00:36:43.386  Starting thread on core 3
00:36:43.386  Starting thread on core 0
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:36:43.645  
00:36:43.645  real	0m13.649s
00:36:43.645  user	0m49.691s
00:36:43.645  sys	0m3.665s
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:43.645  ************************************
00:36:43.645  END TEST nvmf_target_disconnect_tc3
00:36:43.645  ************************************
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:36:43.645  rmmod nvme_rdma
00:36:43.645  rmmod nvme_fabrics
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3549225 ']'
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3549225
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3549225 ']'
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3549225
00:36:43.645    14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:43.645    14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549225
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:36:43.645   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549225'
00:36:43.646  killing process with pid 3549225
00:36:43.646   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3549225
00:36:43.646   14:02:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3549225
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:36:45.549  
00:36:45.549  real	0m36.349s
00:36:45.549  user	2m2.336s
00:36:45.549  sys	0m12.699s
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:45.549  ************************************
00:36:45.549  END TEST nvmf_target_disconnect
00:36:45.549  ************************************
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:36:45.549  
00:36:45.549  real	7m56.018s
00:36:45.549  user	22m24.918s
00:36:45.549  sys	1m49.185s
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:45.549   14:02:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:45.549  ************************************
00:36:45.549  END TEST nvmf_host
00:36:45.549  ************************************
00:36:45.549   14:02:45 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]]
00:36:45.549  
00:36:45.549  real	29m37.199s
00:36:45.549  user	86m29.387s
00:36:45.549  sys	6m52.254s
00:36:45.549   14:02:45 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:45.549   14:02:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:45.549  ************************************
00:36:45.549  END TEST nvmf_rdma
00:36:45.549  ************************************
00:36:45.808   14:02:45  -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:36:45.808   14:02:45  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:45.808   14:02:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:45.808   14:02:45  -- common/autotest_common.sh@10 -- # set +x
00:36:45.808  ************************************
00:36:45.808  START TEST spdkcli_nvmf_rdma
00:36:45.808  ************************************
00:36:45.808   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:36:45.808  * Looking for test storage...
00:36:45.808  * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:45.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:45.808  		--rc genhtml_branch_coverage=1
00:36:45.808  		--rc genhtml_function_coverage=1
00:36:45.808  		--rc genhtml_legend=1
00:36:45.808  		--rc geninfo_all_blocks=1
00:36:45.808  		--rc geninfo_unexecuted_blocks=1
00:36:45.808  		
00:36:45.808  		'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:45.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:45.808  		--rc genhtml_branch_coverage=1
00:36:45.808  		--rc genhtml_function_coverage=1
00:36:45.808  		--rc genhtml_legend=1
00:36:45.808  		--rc geninfo_all_blocks=1
00:36:45.808  		--rc geninfo_unexecuted_blocks=1
00:36:45.808  		
00:36:45.808  		'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:45.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:45.808  		--rc genhtml_branch_coverage=1
00:36:45.808  		--rc genhtml_function_coverage=1
00:36:45.808  		--rc genhtml_legend=1
00:36:45.808  		--rc geninfo_all_blocks=1
00:36:45.808  		--rc geninfo_unexecuted_blocks=1
00:36:45.808  		
00:36:45.808  		'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:45.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:45.808  		--rc genhtml_branch_coverage=1
00:36:45.808  		--rc genhtml_function_coverage=1
00:36:45.808  		--rc genhtml_legend=1
00:36:45.808  		--rc geninfo_all_blocks=1
00:36:45.808  		--rc geninfo_unexecuted_blocks=1
00:36:45.808  		
00:36:45.808  		'
00:36:45.808   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py
00:36:45.808   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:45.808    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:45.808     14:02:45 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:45.809      14:02:45 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:45.809      14:02:45 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:45.809      14:02:45 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:45.809      14:02:45 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH
00:36:45.809      14:02:45 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:45.809  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:45.809    14:02:45 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:45.809   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:36:45.809   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3551220
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3551220
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3551220 ']'
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:46.067  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:46.067   14:02:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:46.067  [2024-12-14 14:02:45.638437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:36:46.067  [2024-12-14 14:02:45.638532] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551220 ]
00:36:46.067  [2024-12-14 14:02:45.772707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:36:46.326  [2024-12-14 14:02:45.876049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:36:46.326  [2024-12-14 14:02:45.876055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]]
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:46.891    14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable
00:36:46.891   14:02:46 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=()
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:36:54.997  Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:36:54.997  Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:36:54.997  Found net devices under 0000:d9:00.0: mlx_0_0
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:36:54.997   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:36:54.998  Found net devices under 0000:d9:00.1: mlx_0_1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:36:54.998  6: mlx_0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:36:54.998      link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:36:54.998      altname enp217s0f0np0
00:36:54.998      altname ens818f0np0
00:36:54.998      inet 192.168.100.8/24 scope global mlx_0_0
00:36:54.998         valid_lft forever preferred_lft forever
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:36:54.998  7: mlx_0_1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
00:36:54.998      link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:36:54.998      altname enp217s0f1np1
00:36:54.998      altname ens818f1np1
00:36:54.998      inet 192.168.100.9/24 scope global mlx_0_1
00:36:54.998         valid_lft forever preferred_lft forever
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:36:54.998      14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:36:54.998      14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:36:54.998     14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:36:54.998  192.168.100.9'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:36:54.998  192.168.100.9'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:36:54.998  192.168.100.9'
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2
00:36:54.998    14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:54.998   14:02:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:36:54.998  '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:36:54.998  '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:36:54.998  '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:36:54.998  '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:36:54.998  '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:36:54.998  '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:36:54.998  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:36:54.998  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:36:54.998  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:36:54.998  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:36:54.998  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:54.998  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:36:54.998  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:36:54.998  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:36:54.999  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:36:54.999  '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:36:54.999  '
00:36:56.372  [2024-12-14 14:02:56.089757] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a940/0x7f69d2d48940) succeed.
00:36:56.372  [2024-12-14 14:02:56.099619] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002aac0/0x7f69d20a6940) succeed.
00:36:57.745  [2024-12-14 14:02:57.443647] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:37:00.274  [2024-12-14 14:02:59.690856] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:37:02.208  [2024-12-14 14:03:01.621382] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:37:03.615  Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:37:03.615  Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:37:03.616  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:37:03.616  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:37:03.616  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:37:03.616  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:37:03.616  Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:37:03.616   14:03:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:04.182   14:03:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:37:04.182  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:37:04.182  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:37:04.182  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:37:04.182  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:37:04.182  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:37:04.182  '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:37:04.182  '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:37:04.182  '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:37:04.182  '
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:37:09.448  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:37:09.448  Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:37:09.448  Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:37:09.448  Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3551220
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3551220 ']'
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3551220
00:37:09.711    14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:09.711    14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551220
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551220'
00:37:09.711  killing process with pid 3551220
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3551220
00:37:09.711   14:03:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3551220
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:37:11.615  rmmod nvme_rdma
00:37:11.615  rmmod nvme_fabrics
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:37:11.615  
00:37:11.615  real	0m25.565s
00:37:11.615  user	0m53.562s
00:37:11.615  sys	0m6.294s
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:11.615   14:03:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:37:11.615  ************************************
00:37:11.615  END TEST spdkcli_nvmf_rdma
00:37:11.615  ************************************
00:37:11.615   14:03:10  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:37:11.615   14:03:10  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:11.615   14:03:10  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:11.615   14:03:10  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:37:11.615   14:03:10  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:37:11.615   14:03:10  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:37:11.615   14:03:10  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:37:11.615   14:03:10  -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:11.615   14:03:10  -- common/autotest_common.sh@10 -- # set +x
00:37:11.615   14:03:10  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:37:11.615   14:03:10  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:11.615   14:03:10  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:11.615   14:03:10  -- common/autotest_common.sh@10 -- # set +x
00:37:18.177  INFO: APP EXITING
00:37:18.177  INFO: killing all VMs
00:37:18.177  INFO: killing vhost app
00:37:18.177  INFO: EXIT DONE
00:37:20.077  Waiting for block devices as requested
00:37:20.077  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:37:20.077  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:37:20.077  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:37:20.077  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:37:20.335  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:37:20.335  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:37:20.335  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:37:20.595  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:37:20.595  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:37:20.595  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:37:20.854  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:37:20.854  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:37:20.854  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:37:21.114  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:37:21.114  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:37:21.114  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:37:21.372  0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:37:24.661  Cleaning
00:37:24.661  Removing:    /var/run/dpdk/spdk0/config
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:37:24.661  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:37:24.661  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:37:24.661  Removing:    /var/run/dpdk/spdk1/config
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:37:24.661  Removing:    /var/run/dpdk/spdk1/fbarray_memzone
00:37:24.661  Removing:    /var/run/dpdk/spdk1/hugepage_info
00:37:24.661  Removing:    /var/run/dpdk/spdk1/mp_socket
00:37:24.661  Removing:    /var/run/dpdk/spdk2/config
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:37:24.661  Removing:    /var/run/dpdk/spdk2/fbarray_memzone
00:37:24.661  Removing:    /var/run/dpdk/spdk2/hugepage_info
00:37:24.661  Removing:    /var/run/dpdk/spdk3/config
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:37:24.661  Removing:    /var/run/dpdk/spdk3/fbarray_memzone
00:37:24.661  Removing:    /var/run/dpdk/spdk3/hugepage_info
00:37:24.661  Removing:    /var/run/dpdk/spdk4/config
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:37:24.661  Removing:    /var/run/dpdk/spdk4/fbarray_memzone
00:37:24.661  Removing:    /var/run/dpdk/spdk4/hugepage_info
00:37:24.661  Removing:    /dev/shm/bdevperf_trace.pid3167566
00:37:24.661  Removing:    /dev/shm/bdev_svc_trace.1
00:37:24.661  Removing:    /dev/shm/nvmf_trace.0
00:37:24.661  Removing:    /dev/shm/spdk_tgt_trace.pid3111492
00:37:24.661  Removing:    /var/run/dpdk/spdk0
00:37:24.661  Removing:    /var/run/dpdk/spdk1
00:37:24.661  Removing:    /var/run/dpdk/spdk2
00:37:24.661  Removing:    /var/run/dpdk/spdk3
00:37:24.661  Removing:    /var/run/dpdk/spdk4
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3107135
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3108930
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3111492
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3112480
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3113837
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3114385
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3115771
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3115925
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3116710
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3121821
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3123542
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3124413
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3125142
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3125884
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3126568
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3126870
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3127155
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3127569
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3128505
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3131942
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3132608
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3133341
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3133607
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3135352
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3135525
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3137296
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3137431
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3138184
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3138268
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3138836
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3139098
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3140589
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3140981
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3141447
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3146402
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3150940
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3161657
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3162474
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3167566
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3167898
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3172657
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3179077
00:37:24.661  Removing:    /var/run/dpdk/spdk_pid3182072
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3193683
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3220433
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3224758
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3323540
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3329113
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3335097
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3344899
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3377427
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3382574
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3426129
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3428005
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3429954
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3432131
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3437194
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3444269
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3451961
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3453030
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3454102
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3455204
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3455690
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3460725
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3460729
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3465545
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3466279
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3466863
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3467665
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3467688
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3470331
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3472186
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3474586
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3476440
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3478291
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3480252
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3486781
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3487434
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3489716
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3491175
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3498891
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3501797
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3507856
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3518990
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3519027
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3539515
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3539835
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3546203
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3546780
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3548496
00:37:24.920  Removing:    /var/run/dpdk/spdk_pid3551220
00:37:24.920  Clean
00:37:25.179   14:03:24  -- common/autotest_common.sh@1453 -- # return 0
00:37:25.179   14:03:24  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:25.179   14:03:24  -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:25.179   14:03:24  -- common/autotest_common.sh@10 -- # set +x
00:37:25.179   14:03:24  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:25.179   14:03:24  -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:25.179   14:03:24  -- common/autotest_common.sh@10 -- # set +x
00:37:25.179   14:03:24  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:37:25.179   14:03:24  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:37:25.179   14:03:24  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:37:25.179   14:03:24  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:25.179    14:03:24  -- spdk/autotest.sh@398 -- # hostname
00:37:25.179   14:03:24  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:37:25.438  geninfo: WARNING: invalid characters removed from testname!
00:37:47.365   14:03:44  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:47.933   14:03:47  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:49.836   14:03:49  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:51.212   14:03:50  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:53.116   14:03:52  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:54.493   14:03:54  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:56.397   14:03:55  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:56.397   14:03:55  -- spdk/autorun.sh@1 -- $ timing_finish
00:37:56.397   14:03:55  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:37:56.397   14:03:55  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:56.397   14:03:55  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:56.397   14:03:55  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:37:56.397  + [[ -n 3025300 ]]
00:37:56.397  + sudo kill 3025300
00:37:56.406  [Pipeline] }
00:37:56.422  [Pipeline] // stage
00:37:56.427  [Pipeline] }
00:37:56.441  [Pipeline] // timeout
00:37:56.445  [Pipeline] }
00:37:56.459  [Pipeline] // catchError
00:37:56.464  [Pipeline] }
00:37:56.478  [Pipeline] // wrap
00:37:56.484  [Pipeline] }
00:37:56.497  [Pipeline] // catchError
00:37:56.505  [Pipeline] stage
00:37:56.507  [Pipeline] { (Epilogue)
00:37:56.520  [Pipeline] catchError
00:37:56.522  [Pipeline] {
00:37:56.535  [Pipeline] echo
00:37:56.536  Cleanup processes
00:37:56.542  [Pipeline] sh
00:37:56.827  + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:56.827  3571907 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:56.841  [Pipeline] sh
00:37:57.125  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:57.125  ++ grep -v 'sudo pgrep'
00:37:57.125  ++ awk '{print $1}'
00:37:57.125  + sudo kill -9
00:37:57.125  + true
00:37:57.137  [Pipeline] sh
00:37:57.500  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:57.500  xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:38:02.773  xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:38:06.973  [Pipeline] sh
00:38:07.258  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:07.258  Artifacts sizes are good
00:38:07.272  [Pipeline] archiveArtifacts
00:38:07.280  Archiving artifacts
00:38:07.420  [Pipeline] sh
00:38:07.705  + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:38:07.718  [Pipeline] cleanWs
00:38:07.727  [WS-CLEANUP] Deleting project workspace...
00:38:07.727  [WS-CLEANUP] Deferred wipeout is used...
00:38:07.733  [WS-CLEANUP] done
00:38:07.735  [Pipeline] }
00:38:07.752  [Pipeline] // catchError
00:38:07.763  [Pipeline] sh
00:38:08.044  + logger -p user.info -t JENKINS-CI
00:38:08.053  [Pipeline] }
00:38:08.067  [Pipeline] // stage
00:38:08.072  [Pipeline] }
00:38:08.086  [Pipeline] // node
00:38:08.091  [Pipeline] End of Pipeline
00:38:08.146  Finished: SUCCESS